Dec 08 18:51:06 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 18:51:06 crc kubenswrapper[5004]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.474198 5004 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480016 5004 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480104 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480114 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480119 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480123 5004 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480136 5004 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480147 5004 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480152 5004 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480158 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480163 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480169 5004 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480178 5004 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480185 5004 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480189 5004 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480194 5004 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480199 5004 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480203 5004 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480207 5004 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480215 5004 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480219 5004 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480224 5004 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480228 5004 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480232 5004 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480240 5004 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480246 5004 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480251 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480255 5004 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480260 5004 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480265 5004 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480270 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480277 5004 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480282 5004 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480286 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480291 5004 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480295 5004 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480299 5004 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480308 5004 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480313 5004 
feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480318 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480324 5004 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480328 5004 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480332 5004 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480342 5004 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480346 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480354 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480359 5004 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480363 5004 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480367 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480372 5004 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480376 5004 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480380 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480384 5004 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480388 5004 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480393 5004 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480397 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480404 5004 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480408 5004 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480412 5004 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480417 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480422 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480426 5004 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480431 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480435 5004 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 
18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480439 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480445 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480449 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480458 5004 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480462 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480466 5004 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480470 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480474 5004 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480480 5004 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480484 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480490 5004 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480521 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480526 5004 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480531 5004 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480536 5004 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480540 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480548 5004 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480553 5004 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480558 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480562 5004 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480566 5004 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480570 5004 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.480574 5004 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.481967 5004 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.481981 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.481986 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.481991 5004 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.481995 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482000 5004 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482004 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482008 5004 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482013 5004 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482017 5004 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482025 5004 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482029 5004 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482034 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482038 5004 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482042 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482046 5004 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482051 5004 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482056 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482061 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482065 5004 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482089 5004 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482094 5004 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482098 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482105 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482109 5004 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482113 5004 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482117 5004 
feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482121 5004 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482125 5004 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482129 5004 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482133 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482137 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482142 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482146 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482150 5004 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482158 5004 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482162 5004 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482167 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482172 5004 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482176 5004 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482181 5004 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482186 5004 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482190 5004 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482194 5004 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482198 5004 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482202 5004 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482207 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482214 5004 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482218 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482222 5004 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482226 5004 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482231 5004 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482236 
5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482240 5004 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482244 5004 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482255 5004 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482259 5004 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482263 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482267 5004 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482271 5004 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482278 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482282 5004 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482286 5004 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482290 5004 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482294 5004 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482298 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482302 5004 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482311 5004 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482316 5004 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482320 5004 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482324 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482331 5004 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482334 5004 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482338 5004 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482342 5004 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482346 5004 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482350 5004 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482354 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482358 5004 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482362 5004 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482365 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482369 5004 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482373 5004 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482379 5004 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482387 5004 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.482391 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483192 5004 flags.go:64] FLAG: --address="0.0.0.0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483230 5004 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483247 5004 flags.go:64] FLAG: --anonymous-auth="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483256 5004 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483265 5004 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483271 5004 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483279 5004 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483287 5004 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483296 
5004 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483301 5004 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483307 5004 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483313 5004 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483318 5004 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483323 5004 flags.go:64] FLAG: --cgroup-root="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483329 5004 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483333 5004 flags.go:64] FLAG: --client-ca-file="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483343 5004 flags.go:64] FLAG: --cloud-config="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483348 5004 flags.go:64] FLAG: --cloud-provider="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483353 5004 flags.go:64] FLAG: --cluster-dns="[]" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483364 5004 flags.go:64] FLAG: --cluster-domain="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483368 5004 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483374 5004 flags.go:64] FLAG: --config-dir="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483378 5004 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483383 5004 flags.go:64] FLAG: --container-log-max-files="5" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483390 5004 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483398 5004 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483404 5004 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483409 5004 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483414 5004 flags.go:64] FLAG: --contention-profiling="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483419 5004 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483425 5004 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483431 5004 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483436 5004 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483454 5004 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483459 5004 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483464 5004 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483470 5004 flags.go:64] FLAG: --enable-load-reader="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483475 5004 flags.go:64] FLAG: --enable-server="true" Dec 08 18:51:06 crc 
kubenswrapper[5004]: I1208 18:51:06.483481 5004 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483496 5004 flags.go:64] FLAG: --event-burst="100" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483502 5004 flags.go:64] FLAG: --event-qps="50" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483506 5004 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483515 5004 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483520 5004 flags.go:64] FLAG: --eviction-hard="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483527 5004 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483532 5004 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483537 5004 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483543 5004 flags.go:64] FLAG: --eviction-soft="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483549 5004 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483555 5004 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483564 5004 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483570 5004 flags.go:64] FLAG: --experimental-mounter-path="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483575 5004 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483580 5004 flags.go:64] FLAG: --fail-swap-on="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483586 5004 flags.go:64] FLAG: --feature-gates="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483593 5004 flags.go:64] FLAG: --file-check-frequency="20s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483598 5004 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483604 5004 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483612 5004 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483616 5004 flags.go:64] FLAG: --healthz-port="10248" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483623 5004 flags.go:64] FLAG: --help="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483628 5004 flags.go:64] FLAG: --hostname-override="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483633 5004 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483638 5004 flags.go:64] FLAG: --http-check-frequency="20s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483642 5004 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483647 5004 flags.go:64] FLAG: --image-credential-provider-config="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483658 5004 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483667 5004 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 
18:51:06.483672 5004 flags.go:64] FLAG: --image-service-endpoint="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483677 5004 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483682 5004 flags.go:64] FLAG: --kube-api-burst="100" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483687 5004 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483695 5004 flags.go:64] FLAG: --kube-api-qps="50" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483700 5004 flags.go:64] FLAG: --kube-reserved="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483705 5004 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483710 5004 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483718 5004 flags.go:64] FLAG: --kubelet-cgroups="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483723 5004 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483727 5004 flags.go:64] FLAG: --lock-file="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483732 5004 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483738 5004 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483743 5004 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483752 5004 flags.go:64] FLAG: --log-json-split-stream="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483757 5004 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483765 5004 flags.go:64] FLAG: --log-text-split-stream="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483770 5004 flags.go:64] FLAG: --logging-format="text" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483775 5004 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483781 5004 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483786 5004 flags.go:64] FLAG: --manifest-url="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483791 5004 flags.go:64] FLAG: --manifest-url-header="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483801 5004 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483806 5004 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483822 5004 flags.go:64] FLAG: --max-pods="110" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483827 5004 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483836 5004 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483841 5004 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483846 5004 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483851 5004 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 
18:51:06.483856 5004 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483867 5004 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483894 5004 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483898 5004 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483903 5004 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483908 5004 flags.go:64] FLAG: --pod-cidr="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483913 5004 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483923 5004 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483927 5004 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483935 5004 flags.go:64] FLAG: --pods-per-core="0" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483940 5004 flags.go:64] FLAG: --port="10250" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483945 5004 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483949 5004 flags.go:64] FLAG: --provider-id="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483954 5004 flags.go:64] FLAG: --qos-reserved="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483959 5004 flags.go:64] FLAG: --read-only-port="10255" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483964 5004 flags.go:64] FLAG: --register-node="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483968 5004 flags.go:64] FLAG: --register-schedulable="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483972 5004 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483987 5004 flags.go:64] FLAG: --registry-burst="10" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483991 5004 flags.go:64] FLAG: --registry-qps="5" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.483996 5004 flags.go:64] FLAG: --reserved-cpus="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484001 5004 flags.go:64] FLAG: --reserved-memory="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484007 5004 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484012 5004 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484016 5004 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484024 5004 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484029 5004 flags.go:64] FLAG: --runonce="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484036 5004 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484045 5004 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484050 5004 flags.go:64] 
FLAG: --seccomp-default="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484055 5004 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484060 5004 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484065 5004 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484090 5004 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484108 5004 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484114 5004 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484119 5004 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484124 5004 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484129 5004 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484134 5004 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484138 5004 flags.go:64] FLAG: --system-cgroups="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484144 5004 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484198 5004 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484204 5004 flags.go:64] FLAG: --tls-cert-file="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484210 5004 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484646 5004 flags.go:64] FLAG: --tls-min-version="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484654 5004 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484659 5004 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484665 5004 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484671 5004 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484676 5004 flags.go:64] FLAG: --v="2" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484703 5004 flags.go:64] FLAG: --version="false" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484711 5004 flags.go:64] FLAG: --vmodule="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484719 5004 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.484724 5004 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484937 5004 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484944 5004 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484948 5004 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484952 5004 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 
18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484961 5004 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484967 5004 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484973 5004 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484979 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484985 5004 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484989 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484994 5004 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.484998 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485003 5004 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485006 5004 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485010 5004 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485014 5004 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485018 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485022 5004 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485026 5004 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485030 5004 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485036 5004 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485040 5004 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485045 5004 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485055 5004 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485065 5004 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485084 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485093 5004 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485097 5004 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485102 5004 feature_gate.go:328] 
unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485106 5004 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485110 5004 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485114 5004 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485118 5004 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485122 5004 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485126 5004 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485130 5004 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485138 5004 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485147 5004 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485151 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485155 5004 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485160 5004 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485164 5004 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485168 5004 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485172 5004 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485177 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485181 5004 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485185 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485190 5004 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485194 5004 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485199 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485203 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485207 5004 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485210 5004 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485214 5004 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485218 
5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485222 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485226 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485230 5004 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485235 5004 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485239 5004 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485245 5004 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485249 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485254 5004 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485259 5004 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485264 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485269 5004 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485273 5004 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485278 5004 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485284 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485292 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485296 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485300 5004 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485305 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485309 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485313 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485318 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485322 5004 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485326 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485330 5004 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485334 5004 feature_gate.go:328] unrecognized feature gate: 
AWSClusterHostedDNS
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485338 5004 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485342 5004 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485346 5004 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485350 5004 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485354 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.485358 5004 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.485374 5004 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.496726 5004 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.496784 5004 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496847 5004 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496859 5004 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496864 5004 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496868 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496878 5004 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496887 5004 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496893 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496898 5004 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496905 5004 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496911 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496916 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496920 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496926 5004 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496930 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496935 5004 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496940 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496944 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496949 5004 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496953 5004 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496958 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496962 5004 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496967 5004 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496972 5004 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496976 5004 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496981 5004 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496987 5004 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496992 5004 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.496996 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497000 5004 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497005 5004 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:06 crc 
kubenswrapper[5004]: W1208 18:51:06.497009 5004 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497013 5004 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497017 5004 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497021 5004 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497025 5004 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497029 5004 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497034 5004 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497038 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497042 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497046 5004 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497050 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497059 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497064 5004 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497069 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497092 5004 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497097 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497101 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497106 5004 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497110 5004 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497114 5004 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497118 5004 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497122 5004 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497126 5004 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497130 5004 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497135 5004 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497139 5004 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 
18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497143 5004 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497147 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497153 5004 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497157 5004 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497161 5004 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497165 5004 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497169 5004 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497174 5004 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497179 5004 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497183 5004 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497187 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497192 5004 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497198 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497203 5004 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497207 5004 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497212 5004 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497216 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497224 5004 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497228 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497233 5004 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497237 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497241 5004 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497246 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497251 5004 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497257 5004 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497262 5004 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497266 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497269 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497274 5004 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497278 5004 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.497286 5004 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497396 5004 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497401 5004 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497405 5004 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497408 5004 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497412 5004 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497415 5004 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497419 5004 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497422 5004 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497426 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497429 5004 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497433 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497437 5004 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497440 5004 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497444 5004 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497447 5004 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 
18:51:06.497451 5004 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497454 5004 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497458 5004 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497461 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497465 5004 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497468 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497472 5004 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497476 5004 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497479 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497482 5004 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497486 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497489 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497492 5004 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497496 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497499 5004 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497502 5004 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497505 5004 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497509 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497513 5004 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497516 5004 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497519 5004 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497523 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497527 5004 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497531 5004 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497534 5004 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497538 5004 
feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497541 5004 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497544 5004 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497548 5004 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497551 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497554 5004 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497557 5004 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497561 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497564 5004 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497567 5004 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497571 5004 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497575 5004 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497578 5004 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497581 5004 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497587 5004 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497591 5004 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497596 5004 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497599 5004 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497602 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497606 5004 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497609 5004 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497612 5004 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497615 5004 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497619 5004 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497622 5004 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497625 5004 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497629 5004 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497632 5004 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497636 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497640 5004 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497645 5004 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497648 5004 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497651 5004 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497655 5004 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497658 5004 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497661 5004 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497665 5004 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497668 5004 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497671 5004 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497674 5004 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497678 5004 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497682 5004 feature_gate.go:351] Setting GA feature 
gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497686 5004 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497691 5004 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497695 5004 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.497699 5004 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.497706 5004 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.498083 5004 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.500567 5004 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.504380 5004 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.504589 5004 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.505264 5004 server.go:1019] "Starting client certificate rotation" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.505410 5004 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.505506 5004 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.511700 5004 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.513939 5004 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.514533 5004 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.523950 5004 log.go:25] "Validated CRI v1 runtime API" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.542683 5004 log.go:25] "Validated CRI v1 image API" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.544973 
5004 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.547115 5004 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-18-45-10-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.547144 5004 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.556715 5004 manager.go:217] Machine: {Timestamp:2025-12-08 18:51:06.555649168 +0000 UTC m=+0.204557496 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25195298816 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:2a592c3d-8402-4b24-bfed-95916d7ee8fd BootID:4b514b11-7c3d-40a7-962d-40f2ee014679 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:3075598 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12597649408 Type:vfs Inodes:3075598 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039063040 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:12597649408 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:2519527424 Type:vfs Inodes:615119 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d4:b2:72 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d4:b2:72 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5d:f6:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4e:1c:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:dc:28:92 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:22:58:71 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:c2:d6:52:35:05 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:bf:59:1e:d6:83 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:25195298816 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.556891 5004 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.557059 5004 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558110 5004 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558150 5004 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558306 5004 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558315 5004 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558334 5004 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558534 5004 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558749 5004 state_mem.go:36] "Initialized new in-memory state store" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.558891 5004 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.559485 5004 kubelet.go:491] "Attempting to sync node with API server" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.559632 5004 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.559658 5004 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.559672 
5004 kubelet.go:397] "Adding apiserver pod source" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.559685 5004 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.561854 5004 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.561872 5004 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.561867 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.562220 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.563066 5004 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.563103 5004 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.564396 5004 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.564604 5004 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.564943 5004 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565274 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565295 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565301 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565308 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565315 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565321 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565327 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565334 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565342 5004 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565354 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565376 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565568 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565767 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.565782 5004 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.569774 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.576938 5004 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.577019 5004 server.go:1295] "Started kubelet" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.577910 5004 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.578237 5004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.578375 5004 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.579008 5004 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.579020 5004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.579476 5004 server.go:317] "Adding debug handlers to kubelet server" Dec 08 18:51:06 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.580293 5004 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.580321 5004 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.580478 5004 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.580794 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.579468 5004 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.582443 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583233 5004 factory.go:153] Registering CRI-O factory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583321 5004 factory.go:223] Registration of the crio container factory successfully Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583417 5004 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583443 5004 factory.go:55] Registering systemd factory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583451 5004 factory.go:223] Registration of the systemd container factory successfully Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583476 5004 factory.go:103] Registering Raw factory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.583495 5004 manager.go:1196] Started watching for new ooms in manager Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.583508 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.583276 5004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f521db9c86ddb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.576973275 +0000 UTC m=+0.225881583,LastTimestamp:2025-12-08 18:51:06.576973275 +0000 UTC m=+0.225881583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.584115 5004 manager.go:319] Starting recovery of all containers Dec 08 18:51:06 crc 
kubenswrapper[5004]: I1208 18:51:06.615242 5004 manager.go:324] Recovery completed Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.626220 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.626824 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.626914 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628334 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628365 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628379 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628397 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628415 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628467 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628486 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628504 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628518 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628531 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628549 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628568 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.628642 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636449 5004 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636523 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636546 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636560 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636572 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636606 5004 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636620 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636633 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636646 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636659 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636674 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636688 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636700 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636720 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636733 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636746 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636758 5004 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636771 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636786 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636798 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636811 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636823 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636835 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636847 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636859 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636872 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636883 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636896 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636912 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636924 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636937 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636951 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636965 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636979 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.636992 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637007 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637021 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637033 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637045 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" 
volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637058 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637131 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637160 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637173 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637186 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637199 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637211 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637223 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637236 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637380 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637392 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637404 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637420 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637433 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637449 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637462 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637478 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637490 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637504 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637516 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637529 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637542 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637555 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637569 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637581 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637595 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637608 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637620 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637633 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637646 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637658 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637670 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637684 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637697 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637710 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637721 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637733 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637745 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637756 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637767 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637780 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637794 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637805 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637817 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637829 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637841 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637853 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637864 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637877 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637888 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637900 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637910 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637921 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637935 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637947 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637959 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637971 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.637983 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638006 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638018 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638030 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638042 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638053 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638065 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638093 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638105 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638117 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638129 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638141 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638152 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638163 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638175 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638188 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638202 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638231 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638245 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638257 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" 
seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638269 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638283 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638294 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638306 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638318 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638329 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638340 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638354 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638367 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638380 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638392 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 18:51:06 crc 
kubenswrapper[5004]: I1208 18:51:06.638403 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638416 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638428 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638439 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638452 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638464 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638474 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638484 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638495 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638506 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638516 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638527 5004 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638538 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638549 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638560 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638570 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638580 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638592 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638604 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638616 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638629 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638641 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638652 5004 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638663 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638674 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638687 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638697 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638710 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638721 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638732 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638743 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638755 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638768 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638782 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638794 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638807 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638821 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638840 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638853 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638865 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638877 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638890 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638901 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638913 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638925 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638938 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638952 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638963 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638976 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.638989 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.639001 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.639013 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.639025 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.639040 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.639065 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644151 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644208 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644228 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644243 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644256 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644270 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644284 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644299 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644312 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644325 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644343 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644359 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644376 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644389 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644404 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.641401 5004 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-clusterid.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-clusterid.service: no such file or directory Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.644464 5004 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.642371 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.644417 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645252 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645280 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645296 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645307 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645318 5004 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645328 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645337 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645348 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645357 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645368 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645379 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645439 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645449 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645459 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645468 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645477 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645487 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645497 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: W1208 18:51:06.645343 5004 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-userpasswords.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-userpasswords.service: no such file or directory Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645507 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645585 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645611 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645626 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645642 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645656 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645669 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645683 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645695 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645708 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645720 5004 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645736 5004 reconstruct.go:97] "Volume reconstruction finished" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.645745 5004 reconciler.go:26] "Reconciler: start to sync state" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.646242 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.646271 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.646281 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.646975 5004 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.646990 5004 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.647007 5004 state_mem.go:36] "Initialized new in-memory state store" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.681693 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.706221 5004 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.708735 5004 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.708785 5004 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.708824 5004 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.708834 5004 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.708883 5004 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.709750 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.781913 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.784708 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.809887 5004 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.882168 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.973984 5004 policy_none.go:49] "None policy: Start" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.974032 5004 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 18:51:06 crc kubenswrapper[5004]: I1208 18:51:06.974049 5004 state_mem.go:35] "Initializing new in-memory state store" Dec 08 18:51:06 crc kubenswrapper[5004]: E1208 18:51:06.982802 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.011022 5004 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.062272 5004 manager.go:341] "Starting Device Plugin manager" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.062345 5004 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.062357 5004 server.go:85] "Starting device plugin registration server" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.062829 5004 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.062846 5004 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.064520 5004 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.064624 5004 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.064631 5004 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 
18:51:07.070527 5004 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.070606 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.163540 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.164311 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.164339 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.164351 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.164373 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.164719 5004 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.186009 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.364837 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.365887 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.365934 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.365949 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.365983 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.366600 5004 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.412060 5004 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.412246 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.413441 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.413523 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.413541 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.414777 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.414874 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.414913 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.415977 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.416036 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.416053 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.416103 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.416111 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.416120 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.417743 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.417863 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.417912 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419207 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419244 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419255 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419511 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419534 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.419550 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.420255 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.420348 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.420389 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421647 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421681 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421693 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421655 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421725 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.421737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422238 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422461 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422501 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422628 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422653 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.422666 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423268 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423288 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423321 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423297 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423335 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423908 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423940 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.423954 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.460056 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.469311 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.492105 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.501869 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.507521 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559129 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559415 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559465 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559500 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559521 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559688 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559750 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559791 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559830 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.559880 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 
18:51:07.559919 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560135 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560235 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560254 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560743 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560768 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560258 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560789 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560817 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560836 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560861 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560880 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560885 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.560897 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.561012 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.561034 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.561865 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.562254 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.562337 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.562468 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.570650 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.620963 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662576 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662630 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662646 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662680 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662697 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662713 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662729 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662746 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662811 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662831 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662872 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662891 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662917 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662935 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662972 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662950 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662987 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662914 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.662938 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663022 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663123 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663122 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663191 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663123 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663162 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663174 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663176 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663199 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663147 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663236 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663238 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.663288 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.760967 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.767571 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.768549 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.768612 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.768622 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.768648 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.769177 5004 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.770470 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: W1208 18:51:07.784795 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-b551831061013d8a4329d1cd67e22847eac671d66e9265f52764ecb52fe435ab WatchSource:0}: Error finding container b551831061013d8a4329d1cd67e22847eac671d66e9265f52764ecb52fe435ab: Status 404 returned error can't find the container with id b551831061013d8a4329d1cd67e22847eac671d66e9265f52764ecb52fe435ab Dec 08 18:51:07 crc kubenswrapper[5004]: W1208 18:51:07.786327 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-f8414cc5e3c240a60b33bd7c0795324266522a40c29ee057184df0b5453e03b3 WatchSource:0}: Error finding container f8414cc5e3c240a60b33bd7c0795324266522a40c29ee057184df0b5453e03b3: Status 404 returned error can't find the container with id f8414cc5e3c240a60b33bd7c0795324266522a40c29ee057184df0b5453e03b3 Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.792095 5004 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.792752 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.799166 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.802540 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: I1208 18:51:07.808595 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:07 crc kubenswrapper[5004]: W1208 18:51:07.812416 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-90d6ac72d12faa5963dff1e475c904e2e302dca5163b122532d607821e95e749 WatchSource:0}: Error finding container 90d6ac72d12faa5963dff1e475c904e2e302dca5163b122532d607821e95e749: Status 404 returned error can't find the container with id 90d6ac72d12faa5963dff1e475c904e2e302dca5163b122532d607821e95e749 Dec 08 18:51:07 crc kubenswrapper[5004]: W1208 18:51:07.838301 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-a939cd04c82ffabb392440aa0fc1936acb5d4c89e81d9e6bc44d2b20a4bf06ed WatchSource:0}: Error finding container a939cd04c82ffabb392440aa0fc1936acb5d4c89e81d9e6bc44d2b20a4bf06ed: Status 404 returned error can't find the container with id a939cd04c82ffabb392440aa0fc1936acb5d4c89e81d9e6bc44d2b20a4bf06ed Dec 08 18:51:07 crc kubenswrapper[5004]: W1208 18:51:07.840578 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-149dad0c03ba8c4a234e05312bd1800f0894ce1a7fd1d0e102afad07b0939ae1 WatchSource:0}: Error finding container 149dad0c03ba8c4a234e05312bd1800f0894ce1a7fd1d0e102afad07b0939ae1: Status 404 returned error can't find the container with id 149dad0c03ba8c4a234e05312bd1800f0894ce1a7fd1d0e102afad07b0939ae1 Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.968863 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.987353 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 08 18:51:07 crc kubenswrapper[5004]: E1208 18:51:07.989922 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.569458 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.571019 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.571030 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.571111 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.571131 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.571161 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.571489 5004 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.707946 5004 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.709201 5004 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.719507 5004 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc" exitCode=0 Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.719625 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.719714 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"b551831061013d8a4329d1cd67e22847eac671d66e9265f52764ecb52fe435ab"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.719855 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.720681 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.720729 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.720743 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.720970 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.723701 5004 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec" exitCode=0 Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.723751 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.723788 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"149dad0c03ba8c4a234e05312bd1800f0894ce1a7fd1d0e102afad07b0939ae1"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.723942 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.731293 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.731434 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.731512 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.731791 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.736689 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.736749 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.736763 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a939cd04c82ffabb392440aa0fc1936acb5d4c89e81d9e6bc44d2b20a4bf06ed"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.739644 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a" exitCode=0 Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.739712 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.739737 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"90d6ac72d12faa5963dff1e475c904e2e302dca5163b122532d607821e95e749"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.739869 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.742496 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.742559 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.742575 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.742858 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.744638 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.747929 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.747978 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.747992 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.748595 5004 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34" exitCode=0 Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.748638 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.748666 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f8414cc5e3c240a60b33bd7c0795324266522a40c29ee057184df0b5453e03b3"} Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.748852 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.750967 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.751012 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:08 crc kubenswrapper[5004]: I1208 18:51:08.751029 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:08 crc kubenswrapper[5004]: E1208 18:51:08.751322 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.538487 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.571050 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.588632 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.755665 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.755720 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.755762 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.758038 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.758089 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.758102 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.758319 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.763353 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.763388 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.763404 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.768748 5004 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823" exitCode=0 Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.768816 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823"} Dec 08 18:51:09 crc 
kubenswrapper[5004]: I1208 18:51:09.769015 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.779181 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.779224 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.779237 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.779485 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.789221 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.789369 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.790922 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.790965 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.790978 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.791256 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.808286 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.808336 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.808353 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8"} Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.808506 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.809152 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.809183 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 18:51:09 crc kubenswrapper[5004]: I1208 18:51:09.809196 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:09 crc kubenswrapper[5004]: E1208 18:51:09.809409 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.177489 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.178457 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.178496 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.178507 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.178535 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.613676 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813151 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d9dd2c4668ac685446c5a036ac7dd8c32ddda20ac9f475d564de7c4ae208fd0d"} Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813194 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911"} Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813339 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813788 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813815 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.813828 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:10 crc kubenswrapper[5004]: E1208 18:51:10.814027 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.816253 5004 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c" exitCode=0 Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.816460 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.816703 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c"} Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.816858 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817104 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817477 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817510 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817522 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817540 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817554 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817595 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817610 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817560 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:10 crc kubenswrapper[5004]: I1208 18:51:10.817665 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:10 crc kubenswrapper[5004]: E1208 18:51:10.817874 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:10 crc kubenswrapper[5004]: E1208 18:51:10.817964 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:10 crc kubenswrapper[5004]: E1208 18:51:10.818164 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.185060 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824475 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616"} Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824582 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202"} Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824597 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f"} Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824608 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924"} Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824615 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a"} Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824665 5004 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824827 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.824979 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828156 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828197 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828228 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828201 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828254 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:11 crc kubenswrapper[5004]: I1208 18:51:11.828239 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:11 crc kubenswrapper[5004]: E1208 18:51:11.828524 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:11 crc kubenswrapper[5004]: E1208 18:51:11.828897 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.828408 5004 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.828473 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.829635 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.829704 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.829724 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:12 crc kubenswrapper[5004]: 
E1208 18:51:12.830415 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.909316 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.909591 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.911166 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.911262 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.911319 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:12 crc kubenswrapper[5004]: E1208 18:51:12.911678 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.975145 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.975534 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.976421 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.976450 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:12 crc kubenswrapper[5004]: I1208 18:51:12.976461 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:12 crc kubenswrapper[5004]: E1208 18:51:12.976871 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.013549 5004 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.569818 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.830629 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.831295 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.831342 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:13 crc kubenswrapper[5004]: I1208 18:51:13.831359 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:13 crc kubenswrapper[5004]: E1208 18:51:13.831759 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not 
found" node="crc" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.169526 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.169743 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.170717 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.170754 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.170766 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:14 crc kubenswrapper[5004]: E1208 18:51:14.171102 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.744575 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.749155 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.833755 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.835531 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.835573 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:14 crc kubenswrapper[5004]: I1208 18:51:14.835581 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:14 crc kubenswrapper[5004]: E1208 18:51:14.835902 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.288148 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.288421 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.290158 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.290237 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.290255 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:15 crc kubenswrapper[5004]: E1208 18:51:15.290893 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.836463 
5004 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.836523 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.837134 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.837167 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.837177 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:15 crc kubenswrapper[5004]: E1208 18:51:15.837458 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.909819 5004 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:51:15 crc kubenswrapper[5004]: I1208 18:51:15.909926 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:51:16 crc kubenswrapper[5004]: I1208 18:51:16.801263 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:16 crc kubenswrapper[5004]: I1208 18:51:16.840053 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:16 crc kubenswrapper[5004]: I1208 18:51:16.841715 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:16 crc kubenswrapper[5004]: I1208 18:51:16.841789 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:16 crc kubenswrapper[5004]: I1208 18:51:16.841815 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:16 crc kubenswrapper[5004]: E1208 18:51:16.842559 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:17 crc kubenswrapper[5004]: E1208 18:51:17.070982 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:18 crc kubenswrapper[5004]: I1208 18:51:18.662331 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 08 18:51:18 crc kubenswrapper[5004]: I1208 18:51:18.662892 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:18 crc kubenswrapper[5004]: I1208 18:51:18.664185 5004 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:18 crc kubenswrapper[5004]: I1208 18:51:18.664218 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:18 crc kubenswrapper[5004]: I1208 18:51:18.664231 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:18 crc kubenswrapper[5004]: E1208 18:51:18.664664 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:20 crc kubenswrapper[5004]: I1208 18:51:20.144125 5004 trace.go:236] Trace[1583457204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:10.141) (total time: 10002ms): Dec 08 18:51:20 crc kubenswrapper[5004]: Trace[1583457204]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:51:20.143) Dec 08 18:51:20 crc kubenswrapper[5004]: Trace[1583457204]: [10.002173938s] [10.002173938s] END Dec 08 18:51:20 crc kubenswrapper[5004]: E1208 18:51:20.144162 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:20 crc kubenswrapper[5004]: E1208 18:51:20.179673 5004 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 08 18:51:20 crc kubenswrapper[5004]: I1208 18:51:20.554000 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:51:20 crc kubenswrapper[5004]: I1208 18:51:20.554316 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 18:51:20 crc kubenswrapper[5004]: I1208 18:51:20.564773 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:51:20 crc kubenswrapper[5004]: I1208 18:51:20.565091 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 18:51:21 crc kubenswrapper[5004]: I1208 18:51:21.192411 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]log ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]etcd ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-filter ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-apiextensions-informers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-apiextensions-controllers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/crd-informer-synced ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-system-namespaces-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 08 18:51:21 crc kubenswrapper[5004]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/bootstrap-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/start-kube-aggregator-informers ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-registration-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-discovery-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]autoregister-completion ok Dec 08 18:51:21 crc kubenswrapper[5004]: [+]poststarthook/apiservice-openapi-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: 
[+]poststarthook/apiservice-openapiv3-controller ok Dec 08 18:51:21 crc kubenswrapper[5004]: livez check failed Dec 08 18:51:21 crc kubenswrapper[5004]: I1208 18:51:21.193846 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:51:22 crc kubenswrapper[5004]: E1208 18:51:22.789483 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 08 18:51:23 crc kubenswrapper[5004]: I1208 18:51:23.380846 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:23 crc kubenswrapper[5004]: I1208 18:51:23.381874 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:23 crc kubenswrapper[5004]: I1208 18:51:23.382035 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:23 crc kubenswrapper[5004]: I1208 18:51:23.382235 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:23 crc kubenswrapper[5004]: I1208 18:51:23.382361 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:23 crc kubenswrapper[5004]: E1208 18:51:23.389219 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:23 crc kubenswrapper[5004]: E1208 18:51:23.426951 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.551455 5004 trace.go:236] Trace[1006432873]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:10.650) (total time: 14900ms): Dec 08 18:51:25 crc kubenswrapper[5004]: Trace[1006432873]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14900ms (18:51:25.551) Dec 08 18:51:25 crc kubenswrapper[5004]: Trace[1006432873]: [14.900474749s] [14.900474749s] END Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.551495 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.551423 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521db9c86ddb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.576973275 +0000 UTC m=+0.225881583,LastTimestamp:2025-12-08 18:51:06.576973275 +0000 UTC m=+0.225881583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.551959 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.552052 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.552154 5004 trace.go:236] Trace[161152266]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:10.967) (total time: 14584ms): Dec 08 18:51:25 crc kubenswrapper[5004]: Trace[161152266]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14584ms (18:51:25.552) Dec 08 18:51:25 crc kubenswrapper[5004]: Trace[161152266]: [14.584608029s] [14.584608029s] END Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.552179 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.552282 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.556631 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is 
now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.562629 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.567609 5004 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.569194 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dd705bfe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.067531232 +0000 UTC m=+0.716439540,LastTimestamp:2025-12-08 18:51:07.067531232 +0000 UTC m=+0.716439540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.575090 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.16432996 +0000 UTC m=+0.813238278,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.575202 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.579712 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.164345241 +0000 UTC m=+0.813253549,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.584395 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.164356102 +0000 UTC m=+0.813264400,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.589749 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.365914723 +0000 UTC m=+1.014823031,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.594723 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 
18:51:07.365941105 +0000 UTC m=+1.014849413,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.598963 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.365955347 +0000 UTC m=+1.014863655,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.606763 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.413507894 +0000 UTC m=+1.062416222,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.612661 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.413533116 +0000 UTC m=+1.062441445,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.624034 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.413548628 +0000 UTC m=+1.062456956,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.628782 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.416028811 +0000 UTC m=+1.064937119,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.633738 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.416087566 +0000 UTC m=+1.064995874,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.638493 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.416103978 +0000 UTC m=+1.065012286,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.643066 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.416113168 +0000 UTC m=+1.065021476,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.647879 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.416120649 +0000 UTC m=+1.065028957,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.652118 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.4161279 +0000 UTC m=+1.065036208,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.655986 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.419229129 +0000 UTC m=+1.068137437,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.659868 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.419250711 +0000 UTC m=+1.068159019,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.669094 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbdea0d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbdea0d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646285643 +0000 UTC m=+0.295193951,LastTimestamp:2025-12-08 18:51:07.419261422 +0000 UTC m=+1.068169730,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.675230 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9b11a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9b11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646262042 +0000 UTC m=+0.295170350,LastTimestamp:2025-12-08 18:51:07.419521945 +0000 UTC m=+1.068430253,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.681633 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f521dbde9e91f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f521dbde9e91f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:06.646276383 +0000 UTC m=+0.295184691,LastTimestamp:2025-12-08 18:51:07.419540257 +0000 UTC m=+1.068448565,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.687815 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e023bcc57 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.792493655 +0000 UTC m=+1.441401983,LastTimestamp:2025-12-08 18:51:07.792493655 +0000 UTC m=+1.441401983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.693564 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e023d9df7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.792612855 +0000 UTC m=+1.441521163,LastTimestamp:2025-12-08 18:51:07.792612855 +0000 UTC m=+1.441521163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.698248 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e03c57da8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.818294696 +0000 UTC m=+1.467203004,LastTimestamp:2025-12-08 18:51:07.818294696 +0000 UTC m=+1.467203004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.702609 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e05342380 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.842323328 +0000 UTC m=+1.491231636,LastTimestamp:2025-12-08 18:51:07.842323328 +0000 UTC m=+1.491231636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.708492 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e0566e25a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:07.845648986 +0000 UTC m=+1.494557294,LastTimestamp:2025-12-08 18:51:07.845648986 +0000 UTC m=+1.494557294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.713041 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e22856132 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.334186802 +0000 UTC m=+1.983095110,LastTimestamp:2025-12-08 18:51:08.334186802 +0000 UTC m=+1.983095110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.718469 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e22a53050 openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.33627144 +0000 UTC m=+1.985179758,LastTimestamp:2025-12-08 18:51:08.33627144 +0000 UTC m=+1.985179758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.723156 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e22b35165 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.337197413 +0000 UTC m=+1.986105721,LastTimestamp:2025-12-08 18:51:08.337197413 +0000 UTC m=+1.986105721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.727498 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e22ba1f82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.337643394 +0000 UTC m=+1.986551712,LastTimestamp:2025-12-08 18:51:08.337643394 +0000 UTC m=+1.986551712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.734381 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e22bb0631 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.337702449 +0000 UTC m=+1.986610767,LastTimestamp:2025-12-08 18:51:08.337702449 +0000 UTC m=+1.986610767,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.738752 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e2349679a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.347033498 +0000 UTC m=+1.995941806,LastTimestamp:2025-12-08 18:51:08.347033498 +0000 UTC m=+1.995941806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.743785 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e2376a973 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.349999475 +0000 UTC m=+1.998907783,LastTimestamp:2025-12-08 18:51:08.349999475 +0000 UTC m=+1.998907783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.749491 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e23812dff openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.350688767 +0000 UTC m=+1.999597075,LastTimestamp:2025-12-08 18:51:08.350688767 +0000 UTC m=+1.999597075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.754570 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e23939f16 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.351897366 +0000 UTC m=+2.000805674,LastTimestamp:2025-12-08 18:51:08.351897366 +0000 UTC m=+2.000805674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.759270 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e23c1b5d7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.354917847 +0000 UTC m=+2.003826155,LastTimestamp:2025-12-08 18:51:08.354917847 +0000 UTC m=+2.003826155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.765200 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e23d02748 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.355864392 +0000 UTC m=+2.004772700,LastTimestamp:2025-12-08 18:51:08.355864392 +0000 UTC m=+2.004772700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.773274 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e30fcaa4c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.576885324 +0000 UTC m=+2.225793632,LastTimestamp:2025-12-08 18:51:08.576885324 +0000 UTC m=+2.225793632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.778439 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e31a1ec0e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.587715598 +0000 UTC m=+2.236623906,LastTimestamp:2025-12-08 18:51:08.587715598 +0000 UTC m=+2.236623906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.790005 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e31b3c542 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.588885314 +0000 UTC m=+2.237793622,LastTimestamp:2025-12-08 18:51:08.588885314 +0000 UTC m=+2.237793622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.796328 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e39a8afb7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.722376631 +0000 UTC m=+2.371284939,LastTimestamp:2025-12-08 18:51:08.722376631 +0000 UTC m=+2.371284939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.802411 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e3a6a4b79 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.735064953 +0000 UTC m=+2.383973261,LastTimestamp:2025-12-08 18:51:08.735064953 +0000 UTC m=+2.383973261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.808489 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e3af7b476 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.744332406 +0000 UTC m=+2.393240714,LastTimestamp:2025-12-08 18:51:08.744332406 +0000 UTC m=+2.393240714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.814483 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e3b729c12 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:08.75238709 +0000 UTC m=+2.401295399,LastTimestamp:2025-12-08 18:51:08.75238709 +0000 UTC m=+2.401295399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.820205 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e4d3dd50a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.050918154 +0000 UTC m=+2.699826462,LastTimestamp:2025-12-08 18:51:09.050918154 +0000 UTC m=+2.699826462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.825321 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e4e8bf8dc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.072816348 +0000 UTC m=+2.721724656,LastTimestamp:2025-12-08 18:51:09.072816348 +0000 UTC m=+2.721724656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.830334 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e4eada778 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.075023736 +0000 UTC m=+2.723932044,LastTimestamp:2025-12-08 18:51:09.075023736 +0000 UTC m=+2.723932044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.835821 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e4f084191 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.080961425 +0000 UTC m=+2.729869733,LastTimestamp:2025-12-08 18:51:09.080961425 +0000 UTC m=+2.729869733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.842207 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e515cb0f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.120049398 +0000 UTC m=+2.768957736,LastTimestamp:2025-12-08 18:51:09.120049398 +0000 UTC m=+2.768957736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.850033 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e5160309b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.120278683 +0000 UTC m=+2.769186991,LastTimestamp:2025-12-08 18:51:09.120278683 +0000 UTC m=+2.769186991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.855561 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e5180a332 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.12240517 +0000 UTC m=+2.771313478,LastTimestamp:2025-12-08 18:51:09.12240517 +0000 UTC m=+2.771313478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.862096 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e5284a481 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.139444865 +0000 UTC m=+2.788353173,LastTimestamp:2025-12-08 18:51:09.139444865 +0000 UTC m=+2.788353173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.867974 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f521e5303cafc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.147777788 +0000 UTC m=+2.796686096,LastTimestamp:2025-12-08 18:51:09.147777788 +0000 UTC m=+2.796686096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.872841 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e56cb95ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.211202988 +0000 UTC m=+2.860111296,LastTimestamp:2025-12-08 18:51:09.211202988 +0000 UTC m=+2.860111296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.877271 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e5f64fdb7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.355474359 +0000 UTC m=+3.004382667,LastTimestamp:2025-12-08 18:51:09.355474359 +0000 UTC m=+3.004382667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.881365 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e609ec7f9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.376038905 +0000 UTC m=+3.024947213,LastTimestamp:2025-12-08 18:51:09.376038905 +0000 UTC m=+3.024947213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.886690 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e60b6cdd1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.377613265 +0000 UTC m=+3.026521573,LastTimestamp:2025-12-08 18:51:09.377613265 +0000 UTC m=+3.026521573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.891120 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e60b788b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.377661104 +0000 UTC m=+3.026569412,LastTimestamp:2025-12-08 18:51:09.377661104 +0000 UTC m=+3.026569412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.897256 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e60b78450 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.377659984 +0000 UTC m=+3.026568292,LastTimestamp:2025-12-08 18:51:09.377659984 +0000 UTC m=+3.026568292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.902259 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e6162fe37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.388897847 +0000 UTC m=+3.037806155,LastTimestamp:2025-12-08 18:51:09.388897847 +0000 UTC m=+3.037806155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.906998 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e616b5017 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.389443095 +0000 UTC m=+3.038351403,LastTimestamp:2025-12-08 18:51:09.389443095 +0000 UTC m=+3.038351403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.909757 5004 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:51:25 crc kubenswrapper[5004]: I1208 18:51:25.909829 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.912201 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e618bc74b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.391570763 +0000 UTC 
m=+3.040479071,LastTimestamp:2025-12-08 18:51:09.391570763 +0000 UTC m=+3.040479071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.916511 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e618ee6b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.391775413 +0000 UTC m=+3.040683741,LastTimestamp:2025-12-08 18:51:09.391775413 +0000 UTC m=+3.040683741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.920470 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e6e4beadf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.605489375 +0000 UTC m=+3.254397693,LastTimestamp:2025-12-08 18:51:09.605489375 +0000 UTC m=+3.254397693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.924571 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f521e6f887d74 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.626236276 +0000 UTC m=+3.275144584,LastTimestamp:2025-12-08 18:51:09.626236276 +0000 UTC m=+3.275144584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.928787 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e717bb5cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.658953165 +0000 UTC m=+3.307861473,LastTimestamp:2025-12-08 18:51:09.658953165 +0000 UTC m=+3.307861473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.932673 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e7186a46b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.659669611 +0000 UTC m=+3.308577919,LastTimestamp:2025-12-08 18:51:09.659669611 +0000 UTC m=+3.308577919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.937667 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e7215d159 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.669052761 +0000 UTC m=+3.317961069,LastTimestamp:2025-12-08 18:51:09.669052761 +0000 UTC m=+3.317961069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.943058 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e7226770f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.670143759 +0000 UTC m=+3.319052067,LastTimestamp:2025-12-08 18:51:09.670143759 +0000 UTC m=+3.319052067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.949069 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521e725a4deb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.673541099 +0000 UTC m=+3.322449417,LastTimestamp:2025-12-08 18:51:09.673541099 +0000 UTC m=+3.322449417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.954474 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e792e8fbc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.788114876 +0000 UTC m=+3.437023194,LastTimestamp:2025-12-08 18:51:09.788114876 +0000 UTC m=+3.437023194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.959868 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e8342ba52 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:09.957208658 +0000 UTC m=+3.606116966,LastTimestamp:2025-12-08 18:51:09.957208658 +0000 UTC m=+3.606116966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.964738 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e8dac55e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.131901925 +0000 UTC m=+3.780810233,LastTimestamp:2025-12-08 18:51:10.131901925 +0000 UTC m=+3.780810233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.966019 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e8dbd0d67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.132997479 +0000 UTC m=+3.781905787,LastTimestamp:2025-12-08 18:51:10.132997479 +0000 UTC m=+3.781905787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.969354 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e8e2a0fc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created 
container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.140141505 +0000 UTC m=+3.789049813,LastTimestamp:2025-12-08 18:51:10.140141505 +0000 UTC m=+3.789049813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.975954 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521e8f66b663 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.160893539 +0000 UTC m=+3.809801847,LastTimestamp:2025-12-08 18:51:10.160893539 +0000 UTC m=+3.809801847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.981945 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9a230627 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.341006887 +0000 UTC m=+3.989915205,LastTimestamp:2025-12-08 18:51:10.341006887 +0000 UTC m=+3.989915205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.986025 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9b0d5113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.356361491 +0000 UTC m=+4.005269799,LastTimestamp:2025-12-08 18:51:10.356361491 +0000 UTC m=+4.005269799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.991119 5004 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521eb6a59adf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.819326687 +0000 UTC m=+4.468234995,LastTimestamp:2025-12-08 18:51:10.819326687 +0000 UTC m=+4.468234995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:25 crc kubenswrapper[5004]: E1208 18:51:25.995795 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ec1d7b9be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.007160766 +0000 UTC m=+4.656069074,LastTimestamp:2025-12-08 18:51:11.007160766 +0000 UTC m=+4.656069074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.000564 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ec2967f66 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.019663206 +0000 UTC m=+4.668571514,LastTimestamp:2025-12-08 18:51:11.019663206 +0000 UTC m=+4.668571514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.005361 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ec2a71b71 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.020751729 +0000 UTC m=+4.669660037,LastTimestamp:2025-12-08 18:51:11.020751729 +0000 UTC m=+4.669660037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.009573 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ecbd62658 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.174829656 +0000 UTC m=+4.823737984,LastTimestamp:2025-12-08 18:51:11.174829656 +0000 UTC m=+4.823737984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.014014 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ecc85ee12 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.186349586 +0000 UTC m=+4.835257914,LastTimestamp:2025-12-08 18:51:11.186349586 +0000 UTC m=+4.835257914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.018708 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ecc94f31c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.187333916 +0000 UTC m=+4.836242244,LastTimestamp:2025-12-08 18:51:11.187333916 +0000 UTC m=+4.836242244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.023408 5004 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ed677f6ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.353206443 +0000 UTC m=+5.002114751,LastTimestamp:2025-12-08 18:51:11.353206443 +0000 UTC m=+5.002114751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.027704 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ed720ca03 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.364270595 +0000 UTC m=+5.013178903,LastTimestamp:2025-12-08 18:51:11.364270595 +0000 UTC m=+5.013178903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.033529 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ed732b94b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.365445963 +0000 UTC m=+5.014354281,LastTimestamp:2025-12-08 18:51:11.365445963 +0000 UTC m=+5.014354281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.037847 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ee2f4d74e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: 
etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.562717006 +0000 UTC m=+5.211625314,LastTimestamp:2025-12-08 18:51:11.562717006 +0000 UTC m=+5.211625314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.068290 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ee3e5fca1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.578520737 +0000 UTC m=+5.227429045,LastTimestamp:2025-12-08 18:51:11.578520737 +0000 UTC m=+5.227429045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.070832 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ee3f92925 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.579777317 +0000 UTC m=+5.228685635,LastTimestamp:2025-12-08 18:51:11.579777317 +0000 UTC m=+5.228685635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.074046 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521eefc99b3c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.777987388 +0000 UTC m=+5.426895696,LastTimestamp:2025-12-08 18:51:11.777987388 +0000 UTC m=+5.426895696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.078686 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f521ef0765a36 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:11.78930847 +0000 UTC m=+5.438216778,LastTimestamp:2025-12-08 18:51:11.78930847 +0000 UTC m=+5.438216778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.089159 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-controller-manager-crc.187f521fe6117b5a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 18:51:26 crc kubenswrapper[5004]: body: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:15.909892954 +0000 UTC m=+9.558801262,LastTimestamp:2025-12-08 18:51:15.909892954 +0000 UTC m=+9.558801262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.093322 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521fe612fd08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:15.909991688 +0000 UTC m=+9.558899996,LastTimestamp:2025-12-08 18:51:15.909991688 +0000 UTC m=+9.558899996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.097568 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f5220fae573f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 18:51:26 crc kubenswrapper[5004]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:51:26 crc kubenswrapper[5004]: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:20.554296313 +0000 UTC m=+14.203204621,LastTimestamp:2025-12-08 18:51:20.554296313 +0000 UTC m=+14.203204621,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.100999 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5220fae6dc42 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:20.554388546 +0000 UTC m=+14.203296854,LastTimestamp:2025-12-08 18:51:20.554388546 +0000 UTC m=+14.203296854,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.104602 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5220fae573f9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f5220fae573f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 18:51:26 crc kubenswrapper[5004]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:51:26 crc kubenswrapper[5004]: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:20.554296313 +0000 UTC m=+14.203204621,LastTimestamp:2025-12-08 18:51:20.565039261 +0000 
UTC m=+14.213947569,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.109116 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5220fae6dc42\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5220fae6dc42 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:20.554388546 +0000 UTC m=+14.203296854,LastTimestamp:2025-12-08 18:51:20.565162625 +0000 UTC m=+14.214070933,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.113054 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f52212103cd9f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Dec 08 18:51:26 crc kubenswrapper[5004]: body: [+]ping ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]log ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]etcd ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-filter ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-apiextensions-informers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-apiextensions-controllers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/crd-informer-synced ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-system-namespaces-controller ok Dec 08 
18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 08 18:51:26 crc kubenswrapper[5004]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/bootstrap-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/start-kube-aggregator-informers ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-registration-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-discovery-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]autoregister-completion ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-openapi-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 08 18:51:26 crc kubenswrapper[5004]: livez check failed Dec 08 18:51:26 crc kubenswrapper[5004]: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:21.193819551 +0000 UTC m=+14.842727879,LastTimestamp:2025-12-08 18:51:21.193819551 +0000 UTC m=+14.842727879,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.117122 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522121055a00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:21.193921024 +0000 UTC m=+14.842829332,LastTimestamp:2025-12-08 18:51:21.193921024 +0000 UTC m=+14.842829332,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.122690 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f521fe6117b5a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-controller-manager-crc.187f521fe6117b5a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 18:51:26 crc kubenswrapper[5004]: body: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:15.909892954 +0000 UTC m=+9.558801262,LastTimestamp:2025-12-08 18:51:25.909801929 +0000 UTC m=+19.558710237,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.127142 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f521fe612fd08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f521fe612fd08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:15.909991688 +0000 UTC m=+9.558899996,LastTimestamp:2025-12-08 18:51:25.90985724 +0000 UTC m=+19.558765548,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.191701 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.191935 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.192904 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.192928 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.192939 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.193278 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.198133 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352181 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48410->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352207 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48412->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352279 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48410->192.168.126.11:17697: read: connection reset by peer" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352289 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48412->192.168.126.11:17697: read: connection reset by peer" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352686 5004 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.352719 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.359442 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f5222547b26ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:48410->192.168.126.11:17697: read: connection reset by peer Dec 08 18:51:26 crc kubenswrapper[5004]: body: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.352246458 +0000 UTC m=+20.001154766,LastTimestamp:2025-12-08 18:51:26.352246458 +0000 UTC m=+20.001154766,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.365231 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f5222547b5da6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:48412->192.168.126.11:17697: read: connection reset by peer Dec 08 18:51:26 crc kubenswrapper[5004]: body: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.352260518 +0000 UTC m=+20.001168826,LastTimestamp:2025-12-08 18:51:26.352260518 +0000 UTC m=+20.001168826,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.369420 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5222547c1a5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48410->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.35230883 +0000 UTC m=+20.001217138,LastTimestamp:2025-12-08 18:51:26.35230883 +0000 UTC m=+20.001217138,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.375529 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5222547c3d0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48412->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.35231771 +0000 UTC m=+20.001226018,LastTimestamp:2025-12-08 18:51:26.35231771 +0000 UTC m=+20.001226018,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.381668 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:51:26 crc kubenswrapper[5004]: &Event{ObjectMeta:{kube-apiserver-crc.187f52225482394b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 18:51:26 crc kubenswrapper[5004]: body: Dec 08 18:51:26 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.352709963 +0000 UTC m=+20.001618271,LastTimestamp:2025-12-08 18:51:26.352709963 +0000 UTC m=+20.001618271,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:26 crc kubenswrapper[5004]: > Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.387383 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5222548290d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:26.352732373 +0000 UTC m=+20.001640681,LastTimestamp:2025-12-08 18:51:26.352732373 +0000 UTC m=+20.001640681,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.577503 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.846050 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.846278 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.847319 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.847385 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.847403 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.847890 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.867997 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.869999 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d9dd2c4668ac685446c5a036ac7dd8c32ddda20ac9f475d564de7c4ae208fd0d" exitCode=255 Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.870043 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d9dd2c4668ac685446c5a036ac7dd8c32ddda20ac9f475d564de7c4ae208fd0d"} Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.870198 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.870772 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.870813 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.870823 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.871191 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:26 crc kubenswrapper[5004]: I1208 18:51:26.871445 5004 scope.go:117] "RemoveContainer" containerID="d9dd2c4668ac685446c5a036ac7dd8c32ddda20ac9f475d564de7c4ae208fd0d" Dec 08 18:51:26 crc kubenswrapper[5004]: E1208 18:51:26.878030 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e8dbd0d67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e8dbd0d67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.132997479 +0000 UTC m=+3.781905787,LastTimestamp:2025-12-08 18:51:26.872920702 +0000 UTC m=+20.521829010,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:27 crc kubenswrapper[5004]: E1208 18:51:27.071218 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:27 crc kubenswrapper[5004]: E1208 18:51:27.299618 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e9a230627\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9a230627 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.341006887 +0000 UTC m=+3.989915205,LastTimestamp:2025-12-08 18:51:27.286105129 +0000 UTC m=+20.935013437,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:27 crc kubenswrapper[5004]: E1208 18:51:27.517716 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e9b0d5113\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9b0d5113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.356361491 +0000 UTC m=+4.005269799,LastTimestamp:2025-12-08 18:51:27.509471175 +0000 UTC m=+21.158379483,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.578434 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.875851 5004 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.878304 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345"} Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.878555 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.879396 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.879442 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:27 crc kubenswrapper[5004]: I1208 18:51:27.879457 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:27 crc kubenswrapper[5004]: E1208 18:51:27.879877 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.574570 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.880379 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.880623 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.880898 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.880925 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:28 crc kubenswrapper[5004]: I1208 18:51:28.880936 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:28 crc kubenswrapper[5004]: E1208 18:51:28.881258 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:29 crc kubenswrapper[5004]: E1208 18:51:29.193999 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.574262 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.663218 5004 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get 
\"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.663286 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-crc" podUID="20c5c5b4bed930554494851fe3cb2b2a" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 18:51:29 crc kubenswrapper[5004]: E1208 18:51:29.668230 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event=< Dec 08 18:51:29 crc kubenswrapper[5004]: &Event{ObjectMeta:{etcd-crc.187f522319d54e93 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Dec 08 18:51:29 crc kubenswrapper[5004]: body: Dec 08 18:51:29 crc kubenswrapper[5004]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:29.663266451 +0000 UTC m=+23.312174759,LastTimestamp:2025-12-08 18:51:29.663266451 +0000 UTC m=+23.312174759,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:51:29 crc kubenswrapper[5004]: > Dec 08 18:51:29 crc kubenswrapper[5004]: E1208 18:51:29.672310 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522319d5fb0a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:29.663310602 +0000 UTC m=+23.312218910,LastTimestamp:2025-12-08 18:51:29.663310602 +0000 UTC m=+23.312218910,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.790136 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.791185 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.791293 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.791335 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.791390 5004 
kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:29 crc kubenswrapper[5004]: E1208 18:51:29.801828 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.882787 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.883531 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.883578 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:29 crc kubenswrapper[5004]: I1208 18:51:29.883588 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:29 crc kubenswrapper[5004]: E1208 18:51:29.883935 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:30 crc kubenswrapper[5004]: E1208 18:51:30.383994 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.575661 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.886611 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.887146 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.888907 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345" exitCode=255 Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.888961 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345"} Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.889013 5004 scope.go:117] "RemoveContainer" containerID="d9dd2c4668ac685446c5a036ac7dd8c32ddda20ac9f475d564de7c4ae208fd0d" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.889221 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.890108 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:30 crc 
kubenswrapper[5004]: I1208 18:51:30.890144 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.890158 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:30 crc kubenswrapper[5004]: E1208 18:51:30.890762 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:30 crc kubenswrapper[5004]: I1208 18:51:30.891099 5004 scope.go:117] "RemoveContainer" containerID="a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345" Dec 08 18:51:30 crc kubenswrapper[5004]: E1208 18:51:30.891371 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:51:30 crc kubenswrapper[5004]: E1208 18:51:30.899514 5004 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522363082761 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,LastTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:31 crc kubenswrapper[5004]: E1208 18:51:31.543296 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:31 crc kubenswrapper[5004]: I1208 18:51:31.574433 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:31 crc kubenswrapper[5004]: I1208 18:51:31.893057 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.574969 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:32 crc 
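The kube-apiserver-check-endpoints container keeps exiting with code 255, so the kubelet puts it into CrashLoopBackOff: "back-off 10s" here, and "back-off 20s" from 18:51:49 onward. That progression matches the commonly cited kubelet behaviour of an initial 10-second restart delay that doubles after each failed restart and is capped at five minutes; the sketch below only prints that assumed schedule (the constants are assumed defaults, not values read from this node's configuration).

```go
package main

import (
	"fmt"
	"time"
)

// Print an assumed CrashLoopBackOff schedule: 10s initial delay, doubling per
// failed restart, capped at 5 minutes. It matches the "back-off 10s" then
// "back-off 20s" messages in this log, but the cap is an assumption.
func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("after failed restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```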
kubenswrapper[5004]: I1208 18:51:32.914301 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.914523 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.915691 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.915739 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.915753 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:32 crc kubenswrapper[5004]: E1208 18:51:32.916121 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.919386 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.956949 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.957161 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.958139 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.958172 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.958186 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:32 crc kubenswrapper[5004]: E1208 18:51:32.958501 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:32 crc kubenswrapper[5004]: I1208 18:51:32.958792 5004 scope.go:117] "RemoveContainer" containerID="a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345" Dec 08 18:51:32 crc kubenswrapper[5004]: E1208 18:51:32.959014 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:51:32 crc kubenswrapper[5004]: E1208 18:51:32.963861 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522363082761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522363082761 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,LastTimestamp:2025-12-08 18:51:32.958987716 +0000 UTC m=+26.607896024,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:33 crc kubenswrapper[5004]: I1208 18:51:33.577907 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:33 crc kubenswrapper[5004]: I1208 18:51:33.899746 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:33 crc kubenswrapper[5004]: I1208 18:51:33.900478 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:33 crc kubenswrapper[5004]: I1208 18:51:33.900516 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:33 crc kubenswrapper[5004]: I1208 18:51:33.900527 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:33 crc kubenswrapper[5004]: E1208 18:51:33.900864 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:34 crc kubenswrapper[5004]: E1208 18:51:34.479562 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:34 crc kubenswrapper[5004]: I1208 18:51:34.575181 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:34 crc kubenswrapper[5004]: E1208 18:51:34.963919 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:35 crc kubenswrapper[5004]: I1208 18:51:35.575861 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:36 crc kubenswrapper[5004]: E1208 18:51:36.199648 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.575485 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.802103 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.803272 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.803311 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.803322 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:36 crc kubenswrapper[5004]: I1208 18:51:36.803345 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:36 crc kubenswrapper[5004]: E1208 18:51:36.810510 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:37 crc kubenswrapper[5004]: E1208 18:51:37.072510 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:37 crc kubenswrapper[5004]: I1208 18:51:37.576689 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.574230 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.677567 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.677868 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.678702 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.678750 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.678766 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:38 crc kubenswrapper[5004]: E1208 18:51:38.679308 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.693483 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 18:51:38 crc 
kubenswrapper[5004]: I1208 18:51:38.911743 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.912433 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.912632 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:38 crc kubenswrapper[5004]: I1208 18:51:38.912804 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:38 crc kubenswrapper[5004]: E1208 18:51:38.913808 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:39 crc kubenswrapper[5004]: I1208 18:51:39.574508 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:40 crc kubenswrapper[5004]: I1208 18:51:40.575726 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:41 crc kubenswrapper[5004]: E1208 18:51:41.117922 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:41 crc kubenswrapper[5004]: I1208 18:51:41.578015 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:42 crc kubenswrapper[5004]: I1208 18:51:42.572910 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:42 crc kubenswrapper[5004]: E1208 18:51:42.817591 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:43 crc kubenswrapper[5004]: E1208 18:51:43.205006 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.575502 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.810650 5004 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.812172 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.812239 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.812256 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:43 crc kubenswrapper[5004]: I1208 18:51:43.812299 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:43 crc kubenswrapper[5004]: E1208 18:51:43.825374 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:44 crc kubenswrapper[5004]: I1208 18:51:44.575737 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:45 crc kubenswrapper[5004]: I1208 18:51:45.575537 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:46 crc kubenswrapper[5004]: I1208 18:51:46.574311 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:47 crc kubenswrapper[5004]: E1208 18:51:47.073485 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.579415 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.710362 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.711340 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.711529 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.711685 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:47 crc kubenswrapper[5004]: E1208 18:51:47.712243 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.712628 5004 scope.go:117] "RemoveContainer" containerID="a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345" Dec 08 18:51:47 crc 
kubenswrapper[5004]: E1208 18:51:47.719944 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e8dbd0d67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e8dbd0d67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.132997479 +0000 UTC m=+3.781905787,LastTimestamp:2025-12-08 18:51:47.714047087 +0000 UTC m=+41.362955395,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:47 crc kubenswrapper[5004]: E1208 18:51:47.881095 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e9a230627\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9a230627 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.341006887 +0000 UTC m=+3.989915205,LastTimestamp:2025-12-08 18:51:47.876543647 +0000 UTC m=+41.525451955,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:47 crc kubenswrapper[5004]: E1208 18:51:47.891277 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f521e9b0d5113\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f521e9b0d5113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:10.356361491 +0000 UTC m=+4.005269799,LastTimestamp:2025-12-08 18:51:47.886118289 +0000 UTC m=+41.535026597,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.935021 5004 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.936838 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801"} Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.937125 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.937699 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.937737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:47 crc kubenswrapper[5004]: I1208 18:51:47.937749 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:47 crc kubenswrapper[5004]: E1208 18:51:47.938005 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:48 crc kubenswrapper[5004]: I1208 18:51:48.575646 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.574519 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.942602 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.943062 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.944546 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" exitCode=255 Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.944599 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801"} Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.944680 5004 scope.go:117] "RemoveContainer" containerID="a4c2f598325b1c4c2ecead293551313f6a6708afca92ba5f4501e0db63215345" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.944959 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.945723 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:49 crc kubenswrapper[5004]: 
I1208 18:51:49.945766 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.945899 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:49 crc kubenswrapper[5004]: E1208 18:51:49.946404 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:49 crc kubenswrapper[5004]: I1208 18:51:49.946727 5004 scope.go:117] "RemoveContainer" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" Dec 08 18:51:49 crc kubenswrapper[5004]: E1208 18:51:49.946915 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:51:49 crc kubenswrapper[5004]: E1208 18:51:49.954525 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522363082761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522363082761 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,LastTimestamp:2025-12-08 18:51:49.946887723 +0000 UTC m=+43.595796031,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:50 crc kubenswrapper[5004]: E1208 18:51:50.210790 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.580851 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.826199 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.827322 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.827351 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.827361 
5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.827384 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:50 crc kubenswrapper[5004]: E1208 18:51:50.836763 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:50 crc kubenswrapper[5004]: I1208 18:51:50.948252 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:51:51 crc kubenswrapper[5004]: I1208 18:51:51.574414 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.575213 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.957713 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.958000 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.958611 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.958744 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.958820 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:52 crc kubenswrapper[5004]: E1208 18:51:52.959270 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:52 crc kubenswrapper[5004]: I1208 18:51:52.959667 5004 scope.go:117] "RemoveContainer" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" Dec 08 18:51:52 crc kubenswrapper[5004]: E1208 18:51:52.959992 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:51:52 crc kubenswrapper[5004]: E1208 18:51:52.964238 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522363082761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522363082761 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,LastTimestamp:2025-12-08 18:51:52.95995105 +0000 UTC m=+46.608859358,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:53 crc kubenswrapper[5004]: I1208 18:51:53.575165 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:54 crc kubenswrapper[5004]: E1208 18:51:54.344458 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:54 crc kubenswrapper[5004]: I1208 18:51:54.575059 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:54 crc kubenswrapper[5004]: E1208 18:51:54.903161 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:55 crc kubenswrapper[5004]: I1208 18:51:55.576266 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:55 crc kubenswrapper[5004]: E1208 18:51:55.744254 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:56 crc kubenswrapper[5004]: I1208 18:51:56.575451 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.075183 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.216199 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource 
\"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.575746 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.837911 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.838858 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.838895 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.838907 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.838927 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.849456 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.938162 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.938526 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.939531 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.939578 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.939591 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.939969 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:57 crc kubenswrapper[5004]: I1208 18:51:57.940231 5004 scope.go:117] "RemoveContainer" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.940405 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:51:57 crc kubenswrapper[5004]: E1208 18:51:57.945811 5004 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522363082761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522363082761 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:30.891335521 +0000 UTC m=+24.540243829,LastTimestamp:2025-12-08 18:51:57.940373768 +0000 UTC m=+51.589282076,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:58 crc kubenswrapper[5004]: I1208 18:51:58.574582 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:51:59 crc kubenswrapper[5004]: I1208 18:51:59.575248 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.574799 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.827689 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.827896 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.828783 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.828827 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:00 crc kubenswrapper[5004]: I1208 18:52:00.828837 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:00 crc kubenswrapper[5004]: E1208 18:52:00.829205 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:00 crc kubenswrapper[5004]: E1208 18:52:00.944841 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:52:01 crc kubenswrapper[5004]: I1208 18:52:01.575558 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Dec 08 18:52:02 crc kubenswrapper[5004]: I1208 18:52:02.576109 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:03 crc kubenswrapper[5004]: I1208 18:52:03.575720 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:04 crc kubenswrapper[5004]: E1208 18:52:04.222526 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.574788 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.849637 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.850614 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.850652 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.850664 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:04 crc kubenswrapper[5004]: I1208 18:52:04.850687 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:04 crc kubenswrapper[5004]: E1208 18:52:04.858059 5004 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:05 crc kubenswrapper[5004]: I1208 18:52:05.575610 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:06 crc kubenswrapper[5004]: I1208 18:52:06.576158 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:07 crc kubenswrapper[5004]: E1208 18:52:07.075570 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:07 crc kubenswrapper[5004]: I1208 18:52:07.575040 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:08 crc kubenswrapper[5004]: I1208 18:52:08.574920 5004 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:09 crc kubenswrapper[5004]: I1208 18:52:09.575522 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:10 crc kubenswrapper[5004]: I1208 18:52:10.577053 5004 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.228287 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.365519 5004 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7wmsv" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.372828 5004 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7wmsv" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.424499 5004 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.505736 5004 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.709298 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.710914 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.710955 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.710966 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.711410 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.711634 5004 scope.go:117] "RemoveContainer" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.858341 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.859347 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.859397 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.859410 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.859518 5004 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.870288 5004 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.870502 5004 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.870520 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.876546 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.876831 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.877048 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.877198 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.877295 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:11Z","lastTransitionTime":"2025-12-08T18:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.891228 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.899165 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.899206 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.899216 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.899233 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.899244 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:11Z","lastTransitionTime":"2025-12-08T18:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.908527 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.915914 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.915953 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.915964 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.915978 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.915989 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:11Z","lastTransitionTime":"2025-12-08T18:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.925204 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.933572 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.933846 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.933958 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.934054 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:11 crc kubenswrapper[5004]: I1208 18:52:11.934164 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:11Z","lastTransitionTime":"2025-12-08T18:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.943563 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.944019 5004 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:52:11 crc kubenswrapper[5004]: E1208 18:52:11.944120 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.014163 5004 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.015587 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209"} Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.015809 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.016411 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.016457 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.016470 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.016956 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.044407 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.145686 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.246670 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.347478 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.373883 5004 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 18:47:11 +0000 UTC" deadline="2025-12-31 07:36:48.82002537 +0000 UTC" Dec 08 18:52:12 crc kubenswrapper[5004]: I1208 18:52:12.374197 5004 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="540h44m36.445834575s" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.447708 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.548866 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.649445 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.750338 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.851653 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:12 crc kubenswrapper[5004]: E1208 18:52:12.952823 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.019562 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.019907 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.021100 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" exitCode=255 Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.021159 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209"} Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.021205 5004 scope.go:117] "RemoveContainer" containerID="d511778f5d6dc952e8c2b78235fd45c673f04437651b6128a0f7f49291aa1801" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.021418 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.022059 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.022113 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.022127 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.022456 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:13 crc kubenswrapper[5004]: I1208 18:52:13.022666 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.022845 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.053516 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.154128 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.255093 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.356153 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.456983 5004 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.558114 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.658547 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.758895 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.860042 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:13 crc kubenswrapper[5004]: E1208 18:52:13.961234 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: I1208 18:52:14.025921 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.062311 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.163163 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.264009 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.364165 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.465101 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.565347 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.665806 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.766510 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.867174 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:14 crc kubenswrapper[5004]: E1208 18:52:14.968327 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.069350 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.170482 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.271442 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.372234 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 
crc kubenswrapper[5004]: E1208 18:52:15.473318 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.573755 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.674246 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.775261 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.876201 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:15 crc kubenswrapper[5004]: E1208 18:52:15.976925 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.077669 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.178591 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.279290 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.380239 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.481057 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.581764 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.682361 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.783443 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.884221 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:16 crc kubenswrapper[5004]: E1208 18:52:16.985054 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.076712 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.085155 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.185738 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.285832 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.386689 5004 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.487657 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.588780 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.689598 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.789942 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.891130 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:17 crc kubenswrapper[5004]: E1208 18:52:17.991979 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.092709 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.193796 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.294327 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.395343 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.495960 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.614481 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.715105 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.815520 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[5004]: E1208 18:52:18.916097 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.017169 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.117687 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.218583 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.319532 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.420299 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.520651 5004 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.621312 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.722110 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.823232 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:19 crc kubenswrapper[5004]: E1208 18:52:19.924244 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.024843 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.125598 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.226313 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.327350 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.428024 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.528515 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.628895 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.729191 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.829800 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:20 crc kubenswrapper[5004]: E1208 18:52:20.930396 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.031124 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.131765 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.231924 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.332265 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.432581 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.533200 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.634203 5004 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.735409 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.836541 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:21 crc kubenswrapper[5004]: E1208 18:52:21.937699 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.016548 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.016864 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.017865 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.017909 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.017921 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.018347 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.018587 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.018761 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.028737 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.032487 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.032587 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.032603 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.032620 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.032631 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:22Z","lastTransitionTime":"2025-12-08T18:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.043297 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.052401 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.052448 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.052460 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.052480 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.052493 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:22Z","lastTransitionTime":"2025-12-08T18:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.062226 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.069675 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.069732 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.069746 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.069762 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.069775 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:22Z","lastTransitionTime":"2025-12-08T18:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.079408 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.086248 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.086302 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.086316 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.086337 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.086350 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:22Z","lastTransitionTime":"2025-12-08T18:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.097391 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.097572 5004 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.097597 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.198514 5004 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.299385 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.400208 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.501125 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.601915 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.702924 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.803093 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.904207 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.957575 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.957863 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.958763 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.958815 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.958834 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.959570 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:22 crc kubenswrapper[5004]: I1208 18:52:22.959855 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:22 crc kubenswrapper[5004]: E1208 18:52:22.960089 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.004828 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.106223 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.206557 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc 
kubenswrapper[5004]: E1208 18:52:23.306745 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.407655 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.508637 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.609177 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.709597 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.810714 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:23 crc kubenswrapper[5004]: E1208 18:52:23.910880 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.011892 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.112719 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.213398 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.314312 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.415188 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.516233 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.616537 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.716662 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.817143 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:24 crc kubenswrapper[5004]: E1208 18:52:24.917685 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.018836 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.119672 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.220553 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.321742 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.422423 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.523447 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.624525 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: I1208 18:52:25.709776 5004 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:25 crc kubenswrapper[5004]: I1208 18:52:25.710806 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:25 crc kubenswrapper[5004]: I1208 18:52:25.710846 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:25 crc kubenswrapper[5004]: I1208 18:52:25.710858 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.711164 5004 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.725325 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.826222 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:25 crc kubenswrapper[5004]: E1208 18:52:25.926376 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.027288 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.127479 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: I1208 18:52:26.140177 5004 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.228365 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.329213 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.429941 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.530434 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.630940 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.731407 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.832337 5004 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Dec 08 18:52:26 crc kubenswrapper[5004]: E1208 18:52:26.933311 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.034451 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.077266 5004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.135508 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.236429 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.337183 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.437922 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.538146 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.639006 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.739121 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.839473 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[5004]: E1208 18:52:27.939669 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.039970 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.140192 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.241005 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.341598 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.442230 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.542740 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.643723 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.744513 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.845005 5004 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:28 crc kubenswrapper[5004]: E1208 18:52:28.945626 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.046927 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.148008 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.248910 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.349193 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.450109 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.550515 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: E1208 18:52:29.651323 5004 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.704068 5004 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.753530 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.753578 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.753591 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.753608 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.753621 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:29Z","lastTransitionTime":"2025-12-08T18:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.780793 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.790432 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.855984 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.856293 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.856565 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.857123 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.857309 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:29Z","lastTransitionTime":"2025-12-08T18:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.885323 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.895846 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.965909 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.965961 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.965971 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.965988 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.965997 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:29Z","lastTransitionTime":"2025-12-08T18:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:29 crc kubenswrapper[5004]: I1208 18:52:29.997141 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.068526 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.068878 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.069031 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.069234 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.069352 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.172182 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.172228 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.172238 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.172252 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.172263 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.274556 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.274613 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.274623 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.274641 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.274662 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.376636 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.376702 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.376715 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.376737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.376750 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.479364 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.479432 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.479462 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.479479 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.479488 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.581837 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.582408 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.582486 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.582561 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.582623 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.619975 5004 apiserver.go:52] "Watching apiserver" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.626919 5004 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.627633 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-multus/multus-additional-cni-plugins-q4dd6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-xnzfz","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z","openshift-ovn-kubernetes/ovnkube-node-dmsk4","openshift-dns/node-resolver-7cqb6","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-qxdkt","openshift-network-node-identity/network-node-identity-dgvkt","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-67htd","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/network-metrics-daemon-7wmb8"] Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.628820 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.630589 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.630887 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.631302 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.631357 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.631485 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.635315 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.635384 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.636219 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.637638 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.638236 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.638860 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.638941 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.640171 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.640491 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.640632 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.653492 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.663798 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.665937 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.666792 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.668908 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.671166 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.671278 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.671420 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.671425 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.673206 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.673517 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.673867 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676159 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676231 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676262 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676500 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676531 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676606 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.677883 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676986 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.677036 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.678054 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.677102 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.676782 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.677171 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.683003 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.683043 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.683383 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.684664 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.684757 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.688542 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.688569 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.688655 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.688842 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.688907 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.689150 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691186 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691466 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691851 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691880 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691889 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691902 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.691913 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.698550 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.700063 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.700389 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.700703 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.702574 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.702904 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 
08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.703037 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.703356 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.728998 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.743919 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761393 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cnibin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761459 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkwxr\" (UniqueName: \"kubernetes.io/projected/89b69152-f317-4e7b-9215-fc6c71abc31f-kube-api-access-mkwxr\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761482 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ntv\" (UniqueName: \"kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761501 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-socket-dir-parent\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761520 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-hostroot\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761543 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-etc-kubernetes\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761573 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: 
\"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761598 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761621 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-binary-copy\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761647 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761667 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6w87\" (UniqueName: \"kubernetes.io/projected/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-kube-api-access-n6w87\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761688 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f740204d-ae80-410c-85a7-d7e935eed5d0-hosts-file\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761707 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761769 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.761803 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 
18:52:30.762004 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762029 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762044 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762059 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762090 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762108 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762125 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762142 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762158 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod 
\"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762174 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-bin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762252 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762268 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762281 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762295 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-conf-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762313 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlsz\" (UniqueName: \"kubernetes.io/projected/5db7afc3-55ae-4aa9-9946-c263aeffae20-kube-api-access-bqlsz\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762338 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cnibin\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762625 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762646 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" 
(UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762666 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cni-binary-copy\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762688 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762705 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8g2b\" (UniqueName: \"kubernetes.io/projected/f740204d-ae80-410c-85a7-d7e935eed5d0-kube-api-access-h8g2b\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762719 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762732 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-system-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762749 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-os-release\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762762 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-multus\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762777 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.762986 5004 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763001 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763019 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f740204d-ae80-410c-85a7-d7e935eed5d0-tmp-dir\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763033 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpqt\" (UniqueName: \"kubernetes.io/projected/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-kube-api-access-blpqt\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763138 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763158 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763174 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-multus-certs\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763190 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763205 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 
18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763220 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763234 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-daemon-config\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763248 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763264 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5db7afc3-55ae-4aa9-9946-c263aeffae20-rootfs\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763280 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-k8s-cni-cncf-io\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763297 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-kubelet\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763314 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5db7afc3-55ae-4aa9-9946-c263aeffae20-proxy-tls\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763328 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-system-cni-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763341 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763355 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763369 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763384 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5db7afc3-55ae-4aa9-9946-c263aeffae20-mcd-auth-proxy-config\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.763400 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.765086 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.765863 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.765877 5004 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.763399 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-os-release\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.766263 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.766391 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-netns\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.766511 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.266481232 +0000 UTC m=+84.915389570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.766654 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.266632887 +0000 UTC m=+84.915541195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.766703 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.766727 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.766742 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.766773 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.767869 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.778354 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.782565 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.782910 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.783214 5004 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.783582 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.28355485 +0000 UTC m=+84.932463188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.782933 5004 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.784155 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.784180 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.784190 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.784253 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.284235442 +0000 UTC m=+84.933143750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.789546 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.797300 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.797749 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.798142 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.798304 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.797816 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.798774 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.798884 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.805360 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.805514 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.813249 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.826387 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.832025 5004 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.845728 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff
9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.853991 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.862141 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868012 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868061 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868101 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 
18:52:30.868125 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868151 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868178 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868206 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868227 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868249 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868270 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868290 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868314 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868335 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: 
\"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868363 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868389 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868411 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868433 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868458 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868479 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868503 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868523 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868547 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868570 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: 
\"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868593 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868613 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868635 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868661 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868685 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868710 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868731 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868755 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868778 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868800 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868827 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868849 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868873 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868894 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868918 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868941 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.868963 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869001 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869024 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869047 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: 
\"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869087 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869110 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869133 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869157 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869180 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869204 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869230 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869254 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869276 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869300 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869322 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869344 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869369 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869394 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869418 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869442 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869466 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869490 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869519 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869544 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869571 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869609 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869637 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869660 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869681 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869702 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869724 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869745 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869765 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869788 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869811 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869916 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869944 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869971 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.869999 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870027 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870054 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870103 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870131 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870152 5004 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870173 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870191 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870208 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870226 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870243 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870260 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870276 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870292 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870308 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 
18:52:30.870324 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870341 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870359 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870374 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870390 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870408 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870426 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870442 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870457 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870473 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 
18:52:30.870489 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870505 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870524 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870541 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870557 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870573 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870589 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870606 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870624 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870641 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 18:52:30 
crc kubenswrapper[5004]: I1208 18:52:30.870659 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870677 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870693 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870709 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870728 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870746 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870762 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870782 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870800 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870817 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: 
\"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870834 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870851 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870868 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870889 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870906 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870925 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870942 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.870960 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871012 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871030 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871052 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871023 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871092 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871110 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871127 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871143 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871160 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871178 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871194 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871212 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871230 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871246 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871581 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871602 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871696 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871728 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871762 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871791 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871817 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871842 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871869 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871903 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871935 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod 
\"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871964 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.871991 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872022 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872046 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872091 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872120 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872137 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872148 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872154 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872199 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872226 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872252 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872280 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872308 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872333 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872355 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872366 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872391 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872409 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872416 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872461 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872504 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872538 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872566 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872593 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872627 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872656 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872668 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872685 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872733 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872762 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872790 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872816 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872844 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872870 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: 
I1208 18:52:30.872881 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872899 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872969 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.872996 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873016 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873034 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873054 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873086 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873105 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873130 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: 
\"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873150 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873172 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873229 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873251 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873271 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873290 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873304 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873308 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873340 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873343 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873381 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873404 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873430 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873457 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873531 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873560 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873556 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873595 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873628 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873662 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873689 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873711 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873719 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874099 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874325 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874926 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875239 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875251 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875287 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875530 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875648 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875859 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.875979 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876325 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876441 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876550 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876623 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876839 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876867 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876881 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.876919 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877087 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877129 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877256 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877292 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877539 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877547 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877664 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877673 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877887 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877961 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877971 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.877977 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878022 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878226 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878241 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878423 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.878534 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.378448147 +0000 UTC m=+85.027356455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878559 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878690 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878708 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878857 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.878963 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879059 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879256 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879421 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879638 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879620 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.879904 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880173 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880236 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). 
InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880141 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880305 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880409 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880457 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880568 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874414 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874471 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874519 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880576 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880777 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.880831 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881016 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881037 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881206 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881465 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881497 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.881582 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.885315 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.885417 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.885639 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.885826 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887006 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887161 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887214 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887320 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887359 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887456 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.887924 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888162 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888260 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888378 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888416 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888577 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.888986 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889098 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889128 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889284 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889330 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889461 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889475 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.889641 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.890372 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.890386 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.890611 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.890734 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.890738 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891057 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891104 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891307 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891350 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891369 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891548 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891620 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891628 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891909 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.891997 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892133 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892181 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892420 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892447 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892463 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892483 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892776 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.892978 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893518 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.873732 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893807 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893831 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893850 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893870 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893886 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893817 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":
0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893907 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894057 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894090 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894117 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894134 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894152 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894208 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f740204d-ae80-410c-85a7-d7e935eed5d0-hosts-file\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894225 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894241 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894259 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894276 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894297 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894316 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b57acd8-c7ba-499a-8742-2a6fb585c7de-serviceca\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894349 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894368 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894385 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894403 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfns\" (UniqueName: \"kubernetes.io/projected/4b57acd8-c7ba-499a-8742-2a6fb585c7de-kube-api-access-5xfns\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894431 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894450 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894473 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-bin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894536 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894557 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894583 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-conf-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894603 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlsz\" (UniqueName: \"kubernetes.io/projected/5db7afc3-55ae-4aa9-9946-c263aeffae20-kube-api-access-bqlsz\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894620 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cnibin\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894636 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894655 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cni-binary-copy\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894688 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8g2b\" (UniqueName: \"kubernetes.io/projected/f740204d-ae80-410c-85a7-d7e935eed5d0-kube-api-access-h8g2b\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894707 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894724 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-system-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894741 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-os-release\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894774 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-multus\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894807 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894826 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894843 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f740204d-ae80-410c-85a7-d7e935eed5d0-tmp-dir\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895116 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-blpqt\" (UniqueName: \"kubernetes.io/projected/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-kube-api-access-blpqt\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895140 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895241 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-multus-certs\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895291 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b57acd8-c7ba-499a-8742-2a6fb585c7de-host\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895315 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895366 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895388 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-daemon-config\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895407 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895470 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod 
\"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895487 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5db7afc3-55ae-4aa9-9946-c263aeffae20-rootfs\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895527 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-k8s-cni-cncf-io\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895546 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-kubelet\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895578 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5db7afc3-55ae-4aa9-9946-c263aeffae20-proxy-tls\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895598 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-system-cni-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895615 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895634 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895663 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895687 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5db7afc3-55ae-4aa9-9946-c263aeffae20-mcd-auth-proxy-config\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895708 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-os-release\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895726 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-netns\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895748 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8m8\" (UniqueName: \"kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895768 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895788 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895804 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895821 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895839 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cnibin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895860 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkwxr\" (UniqueName: 
\"kubernetes.io/projected/89b69152-f317-4e7b-9215-fc6c71abc31f-kube-api-access-mkwxr\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895877 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ntv\" (UniqueName: \"kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895894 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-socket-dir-parent\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895911 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-hostroot\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895935 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-etc-kubernetes\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895979 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-binary-copy\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895997 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896016 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6w87\" (UniqueName: \"kubernetes.io/projected/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-kube-api-access-n6w87\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896307 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896321 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" 
Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896331 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896341 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896350 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896359 5004 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896371 5004 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896384 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896395 5004 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896539 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896558 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896577 5004 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896591 5004 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896605 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896615 5004 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896624 5004 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896633 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896644 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896699 5004 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896714 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896727 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896770 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896788 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896802 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896815 5004 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896828 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896841 5004 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896854 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896866 5004 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896879 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896892 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896906 5004 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896920 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896933 5004 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896946 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896958 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896972 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896985 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896999 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897011 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897024 5004 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897037 5004 reconciler_common.go:299] "Volume detached for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897050 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897081 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897096 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897110 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897119 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897127 5004 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897136 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897147 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897156 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897165 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897174 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897182 5004 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897191 5004 reconciler_common.go:299] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897201 5004 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897210 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897219 5004 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897228 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897238 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897247 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897256 5004 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897267 5004 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897275 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897287 5004 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897297 5004 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897306 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897316 5004 
reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897326 5004 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897335 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897345 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897354 5004 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897363 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897373 5004 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897381 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897392 5004 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897402 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897413 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897423 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897432 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897442 5004 
reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897451 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897460 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897470 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897479 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897489 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897499 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897510 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897518 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897527 5004 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897536 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897546 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897559 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897572 5004 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897582 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897591 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897600 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897612 5004 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897620 5004 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897630 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897640 5004 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897649 5004 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897659 5004 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897669 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897678 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897687 5004 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897696 5004 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897705 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897715 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897726 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897736 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897745 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897754 5004 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897762 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897772 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897781 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897789 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897798 5004 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897808 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897818 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath 
\"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897827 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897836 5004 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.899836 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-multus-certs\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.893827 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894061 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.874410 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894431 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894887 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.894986 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895374 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.895886 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896015 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896023 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896555 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896768 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896902 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.896985 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897327 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897347 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897633 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900556 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900600 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900629 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-bin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900658 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900690 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900717 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-conf-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900755 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897651 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897672 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897768 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.897807 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.898300 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900897 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.906041 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.906267 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.906335 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.907000 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.907794 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.907823 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.907834 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.908179 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.908393 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.908432 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.910901 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.911099 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.913055 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.913188 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.913844 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.918109 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f740204d-ae80-410c-85a7-d7e935eed5d0-tmp-dir\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.918187 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.918482 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919274 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919404 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919441 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919816 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919870 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919904 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919916 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919934 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.919950 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:30Z","lastTransitionTime":"2025-12-08T18:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920135 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920280 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920238 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920346 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920746 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920764 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.920803 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.921083 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.921178 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.921233 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.921714 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.922053 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.922291 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.922487 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.922883 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.922916 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923396 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923707 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923746 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5db7afc3-55ae-4aa9-9946-c263aeffae20-mcd-auth-proxy-config\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923758 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-system-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923761 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923846 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-os-release\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923914 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-os-release\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.923956 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-hostroot\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.924174 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-cni-dir\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.924267 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cnibin\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.924368 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.900149 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.924865 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.924873 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.925525 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.925594 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926243 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926288 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926744 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). 
InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926836 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926937 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.926198 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-cni-binary-copy\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.927354 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.927450 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.927521 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.927694 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.927872 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.928150 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.928267 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.928471 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.928635 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.928364 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-cni-multus\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.929123 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.929570 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.930806 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.930980 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.931017 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.931123 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.931148 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.931504 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932034 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932331 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932359 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpqt\" (UniqueName: \"kubernetes.io/projected/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-kube-api-access-blpqt\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932538 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932792 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.932923 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6w87\" (UniqueName: \"kubernetes.io/projected/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-kube-api-access-n6w87\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933133 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933339 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933569 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933621 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cnibin\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933621 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-system-cni-dir\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933672 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933680 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933742 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5db7afc3-55ae-4aa9-9946-c263aeffae20-rootfs\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933778 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-etc-kubernetes\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933827 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-socket-dir-parent\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.933899 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-k8s-cni-cncf-io\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " 
pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.935541 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-var-lib-kubelet\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.935630 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-host-run-netns\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.935783 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.935858 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.935914 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:52:31.435898512 +0000 UTC m=+85.084806820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936115 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936275 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936333 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936471 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936609 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936736 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f740204d-ae80-410c-85a7-d7e935eed5d0-hosts-file\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936795 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936905 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.936920 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e00ae10b-1af7-4d7e-aad6-135dac0d2aa5-multus-daemon-config\") pod \"multus-qxdkt\" (UID: \"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\") " pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.937617 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.945028 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.945414 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-binary-copy\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.946268 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.946442 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.946507 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlsz\" (UniqueName: \"kubernetes.io/projected/5db7afc3-55ae-4aa9-9946-c263aeffae20-kube-api-access-bqlsz\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.946970 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5285d47c-a794-4eb8-a948-e1f8a9e64ec8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q4dd6\" (UID: \"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\") " pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.947108 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.947809 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkwxr\" (UniqueName: \"kubernetes.io/projected/89b69152-f317-4e7b-9215-fc6c71abc31f-kube-api-access-mkwxr\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.948465 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.948869 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.949186 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.951518 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5db7afc3-55ae-4aa9-9946-c263aeffae20-proxy-tls\") pod \"machine-config-daemon-xnzfz\" (UID: \"5db7afc3-55ae-4aa9-9946-c263aeffae20\") " pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.954652 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.954838 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.958287 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.959108 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.961862 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ntv\" (UniqueName: \"kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv\") pod \"ovnkube-node-dmsk4\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.964543 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8g2b\" (UniqueName: \"kubernetes.io/projected/f740204d-ae80-410c-85a7-d7e935eed5d0-kube-api-access-h8g2b\") pod \"node-resolver-7cqb6\" (UID: \"f740204d-ae80-410c-85a7-d7e935eed5d0\") " pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.968716 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.972124 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.972499 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:30 crc kubenswrapper[5004]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:30 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:30 crc kubenswrapper[5004]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 18:52:30 crc kubenswrapper[5004]: source /etc/kubernetes/apiserver-url.env Dec 08 18:52:30 crc kubenswrapper[5004]: else Dec 08 18:52:30 crc kubenswrapper[5004]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 18:52:30 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:30 crc kubenswrapper[5004]: fi Dec 08 18:52:30 crc kubenswrapper[5004]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 18:52:30 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueF
rom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:30 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:30 crc kubenswrapper[5004]: W1208 18:52:30.972933 5004 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-bc33ec31f003de50b2d5d721ac0789ec6f404620f817f3adb51e6692c13c38ed WatchSource:0}: Error finding container bc33ec31f003de50b2d5d721ac0789ec6f404620f817f3adb51e6692c13c38ed: Status 404 returned error can't find the container with id bc33ec31f003de50b2d5d721ac0789ec6f404620f817f3adb51e6692c13c38ed Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.973737 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.976554 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.977231 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:30 crc kubenswrapper[5004]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:30 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:30 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:30 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:30 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:30 crc kubenswrapper[5004]: fi Dec 08 18:52:30 crc kubenswrapper[5004]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 18:52:30 crc kubenswrapper[5004]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 18:52:30 crc kubenswrapper[5004]: ho_enable="--enable-hybrid-overlay" Dec 08 18:52:30 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 18:52:30 crc kubenswrapper[5004]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 18:52:30 crc kubenswrapper[5004]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 18:52:30 crc kubenswrapper[5004]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 18:52:30 crc kubenswrapper[5004]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 18:52:30 crc kubenswrapper[5004]: --webhook-host=127.0.0.1 \ Dec 08 18:52:30 crc kubenswrapper[5004]: --webhook-port=9743 \ Dec 08 18:52:30 crc kubenswrapper[5004]: ${ho_enable} \ Dec 08 18:52:30 crc kubenswrapper[5004]: --enable-interconnect \ Dec 08 18:52:30 crc kubenswrapper[5004]: --disable-approver \ Dec 08 18:52:30 crc kubenswrapper[5004]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 18:52:30 crc kubenswrapper[5004]: --wait-for-kubernetes-api=200s \ Dec 08 18:52:30 crc kubenswrapper[5004]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 18:52:30 crc kubenswrapper[5004]: --loglevel="${LOGLEVEL}" Dec 08 18:52:30 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 08 18:52:30 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.977301 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.979740 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:30 crc kubenswrapper[5004]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:30 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:30 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:30 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:30 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:30 crc kubenswrapper[5004]: fi Dec 08 18:52:30 crc kubenswrapper[5004]: Dec 08 18:52:30 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 18:52:30 crc kubenswrapper[5004]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 18:52:30 crc kubenswrapper[5004]: --disable-webhook \ Dec 08 18:52:30 crc kubenswrapper[5004]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 18:52:30 crc kubenswrapper[5004]: --loglevel="${LOGLEVEL}" Dec 08 18:52:30 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:30 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.980795 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.982430 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55
c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.987686 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qxdkt" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.992343 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:30 crc kubenswrapper[5004]: W1208 18:52:30.992820 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-785980c4713c77ad602f2a83867f1720cba767a25fbfa3f3b9e0f7ba8301e82d WatchSource:0}: Error finding container 785980c4713c77ad602f2a83867f1720cba767a25fbfa3f3b9e0f7ba8301e82d: Status 404 returned error can't find the container with id 785980c4713c77ad602f2a83867f1720cba767a25fbfa3f3b9e0f7ba8301e82d Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.998984 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8m8\" (UniqueName: \"kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999066 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999099 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999127 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b57acd8-c7ba-499a-8742-2a6fb585c7de-serviceca\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999188 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999211 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfns\" (UniqueName: \"kubernetes.io/projected/4b57acd8-c7ba-499a-8742-2a6fb585c7de-kube-api-access-5xfns\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999275 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b57acd8-c7ba-499a-8742-2a6fb585c7de-host\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999300 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:30 crc kubenswrapper[5004]: E1208 18:52:30.998961 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999374 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999388 5004 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999421 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999434 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999446 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999458 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999470 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:30 crc kubenswrapper[5004]: I1208 18:52:30.999505 5004 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: 
I1208 18:52:30.999519 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999531 5004 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999542 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999555 5004 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999591 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999604 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999617 5004 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999629 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999659 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999673 5004 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999684 5004 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999698 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999709 5004 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999770 5004 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b57acd8-c7ba-499a-8742-2a6fb585c7de-host\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.001646 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002124 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4b57acd8-c7ba-499a-8742-2a6fb585c7de-serviceca\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002232 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:30.999850 5004 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002414 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002430 5004 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002443 5004 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002457 5004 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002469 5004 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002502 5004 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002514 5004 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002536 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002546 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002577 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002589 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002604 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002615 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002627 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003116 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003129 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003141 5004 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003154 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003184 5004 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003195 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: 
\"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003216 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003228 5004 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003240 5004 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003252 5004 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003264 5004 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003275 5004 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003286 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003298 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003309 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003322 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003334 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003346 5004 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003358 5004 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003369 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003400 5004 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003412 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003426 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003438 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003449 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003461 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003479 5004 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003506 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003518 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003530 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003542 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003554 5004 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" 
DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003567 5004 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003580 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003591 5004 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003604 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.002989 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003615 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003629 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003642 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003655 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003668 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003682 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003697 5004 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003721 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003733 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.003745 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004158 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004197 5004 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004211 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004224 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004236 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004249 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004261 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004274 5004 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004287 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004300 5004 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004313 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004325 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004338 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004385 5004 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004399 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004413 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004426 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004439 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.004452 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.007135 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:31 crc kubenswrapper[5004]: W1208 18:52:31.007535 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode00ae10b_1af7_4d7e_aad6_135dac0d2aa5.slice/crio-5d9b634459a62fba282768a09df36c4cfa4dc1ba389d18924ca3217339190876 WatchSource:0}: Error finding container 5d9b634459a62fba282768a09df36c4cfa4dc1ba389d18924ca3217339190876: Status 404 returned error can't find the container with id 5d9b634459a62fba282768a09df36c4cfa4dc1ba389d18924ca3217339190876 Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.009371 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.014111 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.016356 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.017983 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfns\" (UniqueName: \"kubernetes.io/projected/4b57acd8-c7ba-499a-8742-2a6fb585c7de-kube-api-access-5xfns\") pod \"node-ca-67htd\" (UID: \"4b57acd8-c7ba-499a-8742-2a6fb585c7de\") " pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.018203 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8m8\" (UniqueName: \"kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8\") pod \"ovnkube-control-plane-57b78d8988-c924z\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.019295 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 18:52:31 crc kubenswrapper[5004]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 18:52:31 crc kubenswrapper[5004]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blpqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-qxdkt_openshift-multus(e00ae10b-1af7-4d7e-aad6-135dac0d2aa5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.021555 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-qxdkt" podUID="e00ae10b-1af7-4d7e-aad6-135dac0d2aa5" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.022110 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.029120 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.029182 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.029197 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.029214 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.029227 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.030136 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc 
kubenswrapper[5004]: W1208 18:52:31.032417 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea6c2cb7_5c47_47a3_b87e_fc8544207aa8.slice/crio-5a1975d5d45b392de9f069445261f1a3873605d34aad4915088531538c96380b WatchSource:0}: Error finding container 5a1975d5d45b392de9f069445261f1a3873605d34aad4915088531538c96380b: Status 404 returned error can't find the container with id 5a1975d5d45b392de9f069445261f1a3873605d34aad4915088531538c96380b Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.032851 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xnzfz_openshift-machine-config-operator(5db7afc3-55ae-4aa9-9946-c263aeffae20): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.034119 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7cqb6" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.034545 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 18:52:31 crc kubenswrapper[5004]: apiVersion: v1 Dec 08 18:52:31 crc kubenswrapper[5004]: clusters: Dec 08 18:52:31 crc kubenswrapper[5004]: - cluster: Dec 08 18:52:31 crc kubenswrapper[5004]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 18:52:31 crc kubenswrapper[5004]: server: https://api-int.crc.testing:6443 Dec 08 18:52:31 crc kubenswrapper[5004]: name: default-cluster Dec 08 18:52:31 crc kubenswrapper[5004]: contexts: Dec 08 18:52:31 crc kubenswrapper[5004]: - context: Dec 08 18:52:31 crc kubenswrapper[5004]: cluster: default-cluster Dec 08 18:52:31 crc kubenswrapper[5004]: namespace: default Dec 08 18:52:31 crc kubenswrapper[5004]: user: default-auth Dec 08 18:52:31 crc kubenswrapper[5004]: name: default-context Dec 08 18:52:31 crc kubenswrapper[5004]: current-context: default-context Dec 08 18:52:31 crc kubenswrapper[5004]: kind: Config Dec 08 18:52:31 crc kubenswrapper[5004]: preferences: {} Dec 08 18:52:31 crc kubenswrapper[5004]: users: Dec 08 18:52:31 crc kubenswrapper[5004]: - name: default-auth Dec 08 18:52:31 crc kubenswrapper[5004]: user: Dec 08 18:52:31 crc kubenswrapper[5004]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:52:31 crc kubenswrapper[5004]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:52:31 crc kubenswrapper[5004]: EOF Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6ntv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-dmsk4_openshift-ovn-kubernetes(ea6c2cb7-5c47-47a3-b87e-fc8544207aa8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.034955 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-q4dd6_openshift-multus(5285d47c-a794-4eb8-a948-e1f8a9e64ec8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.035637 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xnzfz_openshift-machine-config-operator(5db7afc3-55ae-4aa9-9946-c263aeffae20): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.035700 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.036359 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" podUID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.038155 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.043790 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.053195 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:31 crc kubenswrapper[5004]: set -uo pipefail Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 18:52:31 crc kubenswrapper[5004]: HOSTS_FILE="/etc/hosts" Dec 08 18:52:31 crc kubenswrapper[5004]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Make a temporary file with the old hosts file's attributes. Dec 08 18:52:31 crc kubenswrapper[5004]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo "Failed to preserve hosts file. Exiting." Dec 08 18:52:31 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: while true; do Dec 08 18:52:31 crc kubenswrapper[5004]: declare -A svc_ips Dec 08 18:52:31 crc kubenswrapper[5004]: for svc in "${services[@]}"; do Dec 08 18:52:31 crc kubenswrapper[5004]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 18:52:31 crc kubenswrapper[5004]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 18:52:31 crc kubenswrapper[5004]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 18:52:31 crc kubenswrapper[5004]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 18:52:31 crc kubenswrapper[5004]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 18:52:31 crc kubenswrapper[5004]: for i in ${!cmds[*]} Dec 08 18:52:31 crc kubenswrapper[5004]: do Dec 08 18:52:31 crc kubenswrapper[5004]: ips=($(eval "${cmds[i]}")) Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: svc_ips["${svc}"]="${ips[@]}" Dec 08 18:52:31 crc kubenswrapper[5004]: break Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Update /etc/hosts only if we get valid service IPs Dec 08 18:52:31 crc kubenswrapper[5004]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 18:52:31 crc kubenswrapper[5004]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 18:52:31 crc kubenswrapper[5004]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 18:52:31 crc kubenswrapper[5004]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: continue Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Append resolver entries for services Dec 08 18:52:31 crc kubenswrapper[5004]: rc=0 Dec 08 18:52:31 crc kubenswrapper[5004]: for svc in "${!svc_ips[@]}"; do Dec 08 18:52:31 crc kubenswrapper[5004]: for ip in ${svc_ips[${svc}]}; do Dec 08 18:52:31 crc kubenswrapper[5004]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ $rc -ne 0 ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: continue Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 18:52:31 crc kubenswrapper[5004]: # Replace /etc/hosts with our modified version if needed Dec 08 18:52:31 crc kubenswrapper[5004]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 18:52:31 crc kubenswrapper[5004]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: unset svc_ips Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8g2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-7cqb6_openshift-dns(f740204d-ae80-410c-85a7-d7e935eed5d0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.054818 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-7cqb6" podUID="f740204d-ae80-410c-85a7-d7e935eed5d0" Dec 08 18:52:31 crc kubenswrapper[5004]: W1208 18:52:31.056548 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02dfac61_6fa6_441d_83f2_c2f275a144e8.slice/crio-5a79e581d21599ec84be76f0a22bbf7585bc944571f9dcbdfd8069ed2238e0aa WatchSource:0}: Error finding container 5a79e581d21599ec84be76f0a22bbf7585bc944571f9dcbdfd8069ed2238e0aa: Status 404 returned error can't find the container with id 5a79e581d21599ec84be76f0a22bbf7585bc944571f9dcbdfd8069ed2238e0aa Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.058868 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:31 crc kubenswrapper[5004]: set -euo pipefail Dec 08 18:52:31 crc kubenswrapper[5004]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 18:52:31 crc kubenswrapper[5004]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 18:52:31 crc kubenswrapper[5004]: # As the secret mount is optional we must wait for the files to be present. Dec 08 18:52:31 crc kubenswrapper[5004]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Dec 08 18:52:31 crc kubenswrapper[5004]: TS=$(date +%s) Dec 08 18:52:31 crc kubenswrapper[5004]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 18:52:31 crc kubenswrapper[5004]: HAS_LOGGED_INFO=0 Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: log_missing_certs(){ Dec 08 18:52:31 crc kubenswrapper[5004]: CUR_TS=$(date +%s) Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 18:52:31 crc kubenswrapper[5004]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 18:52:31 crc kubenswrapper[5004]: HAS_LOGGED_INFO=1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: } Dec 08 18:52:31 crc kubenswrapper[5004]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 18:52:31 crc kubenswrapper[5004]: log_missing_certs Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 5 Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/kube-rbac-proxy \ Dec 08 18:52:31 crc kubenswrapper[5004]: --logtostderr \ Dec 08 18:52:31 crc kubenswrapper[5004]: --secure-listen-address=:9108 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --upstream=http://127.0.0.1:29108/ \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-private-key-file=${TLS_PK} \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-cert-file=${TLS_CERT} Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l8m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-c924z_openshift-ovn-kubernetes(02dfac61-6fa6-441d-83f2-c2f275a144e8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.061416 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:31 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_join_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_join_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_transit_switch_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_transit_switch_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: dns_name_resolver_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # This is needed so that converting clusters from GA to TP Dec 08 18:52:31 crc kubenswrapper[5004]: # will rollout control plane pods as well Dec 08 18:52:31 crc kubenswrapper[5004]: network_segmentation_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" != "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: route_advertisements_enable_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 
crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: preconfigured_udn_addresses_enable_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_policy_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 18:52:31 crc kubenswrapper[5004]: admin_network_policy_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: if [ "shared" == "shared" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: gateway_mode_flags="--gateway-mode shared" Dec 08 18:52:31 crc kubenswrapper[5004]: elif [ "shared" == "local" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: gateway_mode_flags="--gateway-mode local" Dec 08 18:52:31 crc kubenswrapper[5004]: else Dec 08 18:52:31 crc kubenswrapper[5004]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 18:52:31 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/ovnkube \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-interconnect \ Dec 08 18:52:31 crc kubenswrapper[5004]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 18:52:31 crc kubenswrapper[5004]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-enable-pprof \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-enable-config-duration \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v4_join_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v6_join_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${dns_name_resolver_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${persistent_ips_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${multi_network_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${network_segmentation_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${gateway_mode_flags} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${route_advertisements_enable_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-ip=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-firewall=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-qos=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-service=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-multicast \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-multi-external-gateway=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${multi_network_policy_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${admin_network_policy_enabled_flag} Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l8m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-c924z_openshift-ovn-kubernetes(02dfac61-6fa6-441d-83f2-c2f275a144e8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.061990 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-67htd" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.062536 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.065173 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerStarted","Data":"31e5c5f6a7ee230566aa1d3e8008a2e32a48e392ea52b2508bf6bac9bdc1a2f5"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.065762 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"bca90a7128145aa9d2c0ac0c9c2856bfeec4b52697437bbe549f1ac1469e5c21"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.066623 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxdkt" event={"ID":"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5","Type":"ContainerStarted","Data":"5d9b634459a62fba282768a09df36c4cfa4dc1ba389d18924ca3217339190876"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.068344 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerStarted","Data":"5a79e581d21599ec84be76f0a22bbf7585bc944571f9dcbdfd8069ed2238e0aa"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.070657 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7cqb6" event={"ID":"f740204d-ae80-410c-85a7-d7e935eed5d0","Type":"ContainerStarted","Data":"20bd803ff8dd855a260552d12cb684119e8aceff94a130e7956124d023136d2f"} 
Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.071895 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"5a1975d5d45b392de9f069445261f1a3873605d34aad4915088531538c96380b"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.072818 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"785980c4713c77ad602f2a83867f1720cba767a25fbfa3f3b9e0f7ba8301e82d"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.074025 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"bc33ec31f003de50b2d5d721ac0789ec6f404620f817f3adb51e6692c13c38ed"} Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.075817 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xnzfz_openshift-machine-config-operator(5db7afc3-55ae-4aa9-9946-c263aeffae20): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.076246 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container 
&Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 18:52:31 crc kubenswrapper[5004]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 18:52:31 crc kubenswrapper[5004]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/
cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blpqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-qxdkt_openshift-multus(e00ae10b-1af7-4d7e-aad6-135dac0d2aa5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.076482 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:31 crc kubenswrapper[5004]: set -uo pipefail Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 18:52:31 crc kubenswrapper[5004]: HOSTS_FILE="/etc/hosts" Dec 08 18:52:31 crc kubenswrapper[5004]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Make a temporary file with the old hosts file's attributes. Dec 08 18:52:31 crc kubenswrapper[5004]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo "Failed to preserve hosts file. Exiting." Dec 08 18:52:31 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: while true; do Dec 08 18:52:31 crc kubenswrapper[5004]: declare -A svc_ips Dec 08 18:52:31 crc kubenswrapper[5004]: for svc in "${services[@]}"; do Dec 08 18:52:31 crc kubenswrapper[5004]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 18:52:31 crc kubenswrapper[5004]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 18:52:31 crc kubenswrapper[5004]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 18:52:31 crc kubenswrapper[5004]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 08 18:52:31 crc kubenswrapper[5004]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:52:31 crc kubenswrapper[5004]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 18:52:31 crc kubenswrapper[5004]: for i in ${!cmds[*]} Dec 08 18:52:31 crc kubenswrapper[5004]: do Dec 08 18:52:31 crc kubenswrapper[5004]: ips=($(eval "${cmds[i]}")) Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: svc_ips["${svc}"]="${ips[@]}" Dec 08 18:52:31 crc kubenswrapper[5004]: break Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Update /etc/hosts only if we get valid service IPs Dec 08 18:52:31 crc kubenswrapper[5004]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 18:52:31 crc kubenswrapper[5004]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 18:52:31 crc kubenswrapper[5004]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 18:52:31 crc kubenswrapper[5004]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: continue Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Append resolver entries for services Dec 08 18:52:31 crc kubenswrapper[5004]: rc=0 Dec 08 18:52:31 crc kubenswrapper[5004]: for svc in "${!svc_ips[@]}"; do Dec 08 18:52:31 crc kubenswrapper[5004]: for ip in ${svc_ips[${svc}]}; do Dec 08 18:52:31 crc kubenswrapper[5004]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ $rc -ne 0 ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: continue Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 18:52:31 crc kubenswrapper[5004]: # Replace /etc/hosts with our modified version if needed Dec 08 18:52:31 crc kubenswrapper[5004]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 18:52:31 crc kubenswrapper[5004]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait Dec 08 18:52:31 crc kubenswrapper[5004]: unset svc_ips Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8g2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-7cqb6_openshift-dns(f740204d-ae80-410c-85a7-d7e935eed5d0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.076747 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 18:52:31 crc kubenswrapper[5004]: apiVersion: v1 Dec 08 18:52:31 crc kubenswrapper[5004]: clusters: Dec 08 18:52:31 crc kubenswrapper[5004]: - cluster: Dec 08 18:52:31 crc kubenswrapper[5004]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 18:52:31 crc kubenswrapper[5004]: 
server: https://api-int.crc.testing:6443 Dec 08 18:52:31 crc kubenswrapper[5004]: name: default-cluster Dec 08 18:52:31 crc kubenswrapper[5004]: contexts: Dec 08 18:52:31 crc kubenswrapper[5004]: - context: Dec 08 18:52:31 crc kubenswrapper[5004]: cluster: default-cluster Dec 08 18:52:31 crc kubenswrapper[5004]: namespace: default Dec 08 18:52:31 crc kubenswrapper[5004]: user: default-auth Dec 08 18:52:31 crc kubenswrapper[5004]: name: default-context Dec 08 18:52:31 crc kubenswrapper[5004]: current-context: default-context Dec 08 18:52:31 crc kubenswrapper[5004]: kind: Config Dec 08 18:52:31 crc kubenswrapper[5004]: preferences: {} Dec 08 18:52:31 crc kubenswrapper[5004]: users: Dec 08 18:52:31 crc kubenswrapper[5004]: - name: default-auth Dec 08 18:52:31 crc kubenswrapper[5004]: user: Dec 08 18:52:31 crc kubenswrapper[5004]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:52:31 crc kubenswrapper[5004]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:52:31 crc kubenswrapper[5004]: EOF Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6ntv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-dmsk4_openshift-ovn-kubernetes(ea6c2cb7-5c47-47a3-b87e-fc8544207aa8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.076975 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:31 crc kubenswrapper[5004]: set -euo pipefail Dec 08 18:52:31 crc kubenswrapper[5004]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 18:52:31 crc kubenswrapper[5004]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 18:52:31 crc kubenswrapper[5004]: # As the secret mount is optional we must wait for the files to be present. Dec 08 18:52:31 crc kubenswrapper[5004]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Dec 08 18:52:31 crc kubenswrapper[5004]: TS=$(date +%s) Dec 08 18:52:31 crc kubenswrapper[5004]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 18:52:31 crc kubenswrapper[5004]: HAS_LOGGED_INFO=0 Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: log_missing_certs(){ Dec 08 18:52:31 crc kubenswrapper[5004]: CUR_TS=$(date +%s) Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 18:52:31 crc kubenswrapper[5004]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 18:52:31 crc kubenswrapper[5004]: HAS_LOGGED_INFO=1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: } Dec 08 18:52:31 crc kubenswrapper[5004]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 18:52:31 crc kubenswrapper[5004]: log_missing_certs Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 5 Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/kube-rbac-proxy \ Dec 08 18:52:31 crc kubenswrapper[5004]: --logtostderr \ Dec 08 18:52:31 crc kubenswrapper[5004]: --secure-listen-address=:9108 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --upstream=http://127.0.0.1:29108/ \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-private-key-file=${TLS_PK} \ Dec 08 18:52:31 crc kubenswrapper[5004]: --tls-cert-file=${TLS_CERT} Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l8m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-c924z_openshift-ovn-kubernetes(02dfac61-6fa6-441d-83f2-c2f275a144e8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077188 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077326 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:31 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 18:52:31 crc kubenswrapper[5004]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 18:52:31 crc kubenswrapper[5004]: ho_enable="--enable-hybrid-overlay" Dec 08 18:52:31 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 18:52:31 crc kubenswrapper[5004]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 18:52:31 crc kubenswrapper[5004]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --webhook-host=127.0.0.1 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --webhook-port=9743 \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ho_enable} \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-interconnect \ Dec 08 18:52:31 crc kubenswrapper[5004]: --disable-approver \ Dec 08 18:52:31 crc kubenswrapper[5004]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --wait-for-kubernetes-api=200s \ Dec 08 18:52:31 crc kubenswrapper[5004]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --loglevel="${LOGLEVEL}" Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077392 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-qxdkt" podUID="e00ae10b-1af7-4d7e-aad6-135dac0d2aa5" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077831 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-q4dd6_openshift-multus(5285d47c-a794-4eb8-a948-e1f8a9e64ec8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077880 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.077897 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-7cqb6" podUID="f740204d-ae80-410c-85a7-d7e935eed5d0" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.078234 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" 
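The egress-router-binary-copy init container above runs /entrypoint/cnibincopy.sh with RHEL8_SOURCE_DIRECTORY, RHEL9_SOURCE_DIRECTORY and DEFAULT_SOURCE_DIRECTORY set, with the host CNI plugin directory mounted at /host/opt/cni/bin and the host os-release at /host/etc/os-release. The script itself is not reproduced in the log; the sketch below is only a guess at its selection logic (pick a source directory by the host's major version, then copy the binaries), with the env var names and mount paths taken from the log and everything else an assumption.

#!/usr/bin/env bash
# HYPOTHETICAL sketch of a cnibincopy-style step: choose the CNI binary source
# directory from the mounted os-release, then copy into the host plugin dir.
set -euo pipefail
RHEL8_SOURCE_DIRECTORY=${RHEL8_SOURCE_DIRECTORY:-/usr/src/egress-router-cni/rhel8/bin/}
RHEL9_SOURCE_DIRECTORY=${RHEL9_SOURCE_DIRECTORY:-/usr/src/egress-router-cni/rhel9/bin/}
DEFAULT_SOURCE_DIRECTORY=${DEFAULT_SOURCE_DIRECTORY:-/usr/src/egress-router-cni/bin/}
DEST=/host/opt/cni/bin

# shellcheck disable=SC1091
source /host/etc/os-release        # provides ID and VERSION_ID for the host

src="${DEFAULT_SOURCE_DIRECTORY}"
case "${ID}-${VERSION_ID%%.*}" in
  rhel-8|centos-8) src="${RHEL8_SOURCE_DIRECTORY}" ;;
  rhel-9|centos-9) src="${RHEL9_SOURCE_DIRECTORY}" ;;
esac

echo "copying CNI binaries from ${src} to ${DEST}"
cp -f "${src}"* "${DEST}/"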
podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.078654 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d29f9ebb3392b3d36e5583ee88f717da1f5ee386abf23bac183f75265af67161"} Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.079298 5004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xnzfz_openshift-machine-config-operator(5db7afc3-55ae-4aa9-9946-c263aeffae20): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.079635 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:31 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_join_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 
crc kubenswrapper[5004]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_join_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_transit_switch_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_transit_switch_subnet_opt= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "" != "" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: dns_name_resolver_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # This is needed so that converting clusters from GA to TP Dec 08 18:52:31 crc kubenswrapper[5004]: # will rollout control plane pods as well Dec 08 18:52:31 crc kubenswrapper[5004]: network_segmentation_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" != "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: route_advertisements_enable_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: preconfigured_udn_addresses_enable_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 18:52:31 crc kubenswrapper[5004]: multi_network_policy_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "false" == "true" ]]; then Dec 08 18:52:31 crc 
kubenswrapper[5004]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 18:52:31 crc kubenswrapper[5004]: admin_network_policy_enabled_flag= Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ "true" == "true" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: if [ "shared" == "shared" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: gateway_mode_flags="--gateway-mode shared" Dec 08 18:52:31 crc kubenswrapper[5004]: elif [ "shared" == "local" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: gateway_mode_flags="--gateway-mode local" Dec 08 18:52:31 crc kubenswrapper[5004]: else Dec 08 18:52:31 crc kubenswrapper[5004]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 18:52:31 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/ovnkube \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-interconnect \ Dec 08 18:52:31 crc kubenswrapper[5004]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 18:52:31 crc kubenswrapper[5004]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-enable-pprof \ Dec 08 18:52:31 crc kubenswrapper[5004]: --metrics-enable-config-duration \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v4_join_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v6_join_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${dns_name_resolver_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${persistent_ips_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${multi_network_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${network_segmentation_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${gateway_mode_flags} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${route_advertisements_enable_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-ip=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-firewall=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-qos=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-egress-service=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-multicast \ Dec 08 18:52:31 crc kubenswrapper[5004]: --enable-multi-external-gateway=true \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${multi_network_policy_enabled_flag} \ Dec 08 18:52:31 crc kubenswrapper[5004]: ${admin_network_policy_enabled_flag} Dec 08 18:52:31 crc kubenswrapper[5004]: 
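The ovnkube-cluster-manager entrypoint above appears to be rendered by the cluster-network-operator, which substitutes literal values into the script before the pod is created; that is why it is full of degenerate tests such as if [[ "" != "" ]] (option not set) and if [ "shared" == "shared" ] (gateway mode already decided). A condensed restatement of the same flag-assembly pattern for a subset of the options, with the substituted literals pulled back out into variables whose defaults reproduce the values visible in the log:

#!/usr/bin/env bash
# Condensed restatement of the flag assembly in the rendered entrypoint above.
set -euo pipefail
GATEWAY_MODE=${GATEWAY_MODE:-shared}                              # literal "shared" above
MULTI_NETWORK_ENABLED=${MULTI_NETWORK_ENABLED:-true}              # literal "true" above
NETWORK_SEGMENTATION_ENABLED=${NETWORK_SEGMENTATION_ENABLED:-true}
ADMIN_NETWORK_POLICY_ENABLED=${ADMIN_NETWORK_POLICY_ENABLED:-true}
ROUTE_ADVERTISEMENTS_ENABLED=${ROUTE_ADVERTISEMENTS_ENABLED:-false}

flags=(--enable-interconnect --enable-persistent-ips)
if [[ "${MULTI_NETWORK_ENABLED}" == "true" ]]; then flags+=(--enable-multi-network); fi
if [[ "${NETWORK_SEGMENTATION_ENABLED}" == "true" ]]; then flags+=(--enable-network-segmentation); fi
if [[ "${ADMIN_NETWORK_POLICY_ENABLED}" == "true" ]]; then flags+=(--enable-admin-network-policy); fi
if [[ "${ROUTE_ADVERTISEMENTS_ENABLED}" == "true" ]]; then flags+=(--enable-route-advertisements); fi

case "${GATEWAY_MODE}" in
  shared|local) flags+=(--gateway-mode "${GATEWAY_MODE}") ;;
  *) echo "Invalid OVN_GATEWAY_MODE: \"${GATEWAY_MODE}\". Must be \"local\" or \"shared\"."; exit 1 ;;
esac

echo "would exec: /usr/bin/ovnkube ${flags[*]}"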
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l8m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-c924z_openshift-ovn-kubernetes(02dfac61-6fa6-441d-83f2-c2f275a144e8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.079798 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -f "/env/_master" ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: source "/env/_master" Dec 08 18:52:31 crc kubenswrapper[5004]: set +o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: Dec 08 18:52:31 crc kubenswrapper[5004]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 18:52:31 crc kubenswrapper[5004]: --disable-webhook \ Dec 08 18:52:31 crc kubenswrapper[5004]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 18:52:31 crc kubenswrapper[5004]: --loglevel="${LOGLEVEL}" Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.080046 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 18:52:31 crc kubenswrapper[5004]: set -o allexport Dec 08 18:52:31 crc kubenswrapper[5004]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 18:52:31 crc kubenswrapper[5004]: source /etc/kubernetes/apiserver-url.env Dec 08 18:52:31 crc kubenswrapper[5004]: else Dec 08 18:52:31 crc kubenswrapper[5004]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 18:52:31 crc kubenswrapper[5004]: exit 1 Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 18:52:31 crc kubenswrapper[5004]: 
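Several of the entrypoints above (network-operator sourcing /etc/kubernetes/apiserver-url.env, the webhook/approver/cluster-manager scripts sourcing /env/_master) use the same idiom: set -o allexport, source a file of plain KEY=VALUE assignments, set +o allexport. Everything assigned inside the file becomes an exported environment variable without writing "export" on each line. A tiny standalone demonstration; the file name and variable names are made up, only the idiom itself comes from the log.

#!/usr/bin/env bash
# Demonstrate the allexport/source idiom used by the entrypoints above.
demo_env=$(mktemp)
cat > "${demo_env}" <<'EOF'
APISERVER_URL=https://api-int.crc.testing:6443
LOGLEVEL=2
EOF

set -o allexport
# shellcheck disable=SC1090
source "${demo_env}"
set +o allexport

# A child process sees the variables because allexport marked them for export:
bash -c 'echo "child sees APISERVER_URL=${APISERVER_URL} LOGLEVEL=${LOGLEVEL}"'
rm -f "${demo_env}"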
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.080382 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" podUID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.080525 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.081039 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.081057 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: 
\"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.081125 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.084553 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf36
1840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\
\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: W1208 18:52:31.087609 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b57acd8_c7ba_499a_8742_2a6fb585c7de.slice/crio-7e50c83c3f1c27f8fd2cf0cf4658ed5e2f5db3589f0704465238b468e143e537 WatchSource:0}: Error finding container 7e50c83c3f1c27f8fd2cf0cf4658ed5e2f5db3589f0704465238b468e143e537: Status 404 returned error can't find the container with id 7e50c83c3f1c27f8fd2cf0cf4658ed5e2f5db3589f0704465238b468e143e537 Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.089794 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:31 crc kubenswrapper[5004]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 18:52:31 crc kubenswrapper[5004]: while [ true ]; Dec 08 18:52:31 crc kubenswrapper[5004]: do Dec 08 18:52:31 crc kubenswrapper[5004]: for f in $(ls /tmp/serviceca); do Dec 08 18:52:31 crc kubenswrapper[5004]: echo $f Dec 08 18:52:31 crc kubenswrapper[5004]: ca_file_path="/tmp/serviceca/${f}" Dec 08 18:52:31 crc kubenswrapper[5004]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 18:52:31 crc kubenswrapper[5004]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 18:52:31 crc kubenswrapper[5004]: if [ -e "${reg_dir_path}" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 18:52:31 crc kubenswrapper[5004]: else Dec 08 18:52:31 crc kubenswrapper[5004]: mkdir $reg_dir_path Dec 08 18:52:31 crc kubenswrapper[5004]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: for d in $(ls /etc/docker/certs.d); do Dec 08 18:52:31 crc kubenswrapper[5004]: echo $d Dec 08 18:52:31 crc kubenswrapper[5004]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 18:52:31 crc kubenswrapper[5004]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 18:52:31 crc 
kubenswrapper[5004]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 18:52:31 crc kubenswrapper[5004]: rm -rf /etc/docker/certs.d/$d Dec 08 18:52:31 crc kubenswrapper[5004]: fi Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: sleep 60 & wait ${!} Dec 08 18:52:31 crc kubenswrapper[5004]: done Dec 08 18:52:31 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xfns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-67htd_openshift-image-registry(4b57acd8-c7ba-499a-8742-2a6fb585c7de): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:31 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.090993 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-67htd" podUID="4b57acd8-c7ba-499a-8742-2a6fb585c7de" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.099307 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
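The node-ca loop above maps file names under /tmp/serviceca to directories under /etc/docker/certs.d and back: the registry address uses ':' before the port, while the mounted key uses '..' in its place (presumably because ':' is not a valid key in the mounted volume). The two sed expressions from the script, exercised on a made-up registry name:

#!/usr/bin/env bash
# Round-trip of the node-ca name mangling; the registry name is an example.
f="image-registry.example.svc..5000"
reg=$(echo "$f"   | sed -r 's/(.*)\.\./\1:/')     # mounted key -> certs.d dir name
key=$(echo "$reg" | sed -r 's/(.*):/\1\.\./')     # certs.d dir -> mounted key
echo "$reg"    # image-registry.example.svc:5000
echo "$key"    # image-registry.example.svc..5000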
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.110315 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
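The repeated "Failed to update status for pod" records here are secondary failures: the kubelet's status patches are rejected because the pod.network-node-identity.openshift.io webhook endpoint at https://127.0.0.1:9743 is not serving, and that endpoint is served by the very network-node-identity webhook container shown earlier failing with CreateContainerConfigError. A short sketch for confirming both symptoms from the node; the port comes from the log, the systemd unit name kubelet is assumed, and /dev/tcp is plain bash.

#!/usr/bin/env bash
# Check the node-identity webhook port and count the two error families.
PORT=9743
if timeout 2 bash -c "exec 3<>/dev/tcp/127.0.0.1/${PORT}" 2>/dev/null; then
  echo "webhook port ${PORT}: something is listening"
else
  echo "webhook port ${PORT}: connection refused (matches the errors above)"
fi

echo -n "envvar construction failures: "
journalctl -u kubelet --since today --no-pager | grep -c 'cannot construct envvars'
echo -n "status patches bounced by the webhook: "
journalctl -u kubelet --since today --no-pager | grep -c 'pod.network-node-identity.openshift.io'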
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.117897 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.130312 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.131552 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.131577 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.131586 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.131601 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.131612 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.142392 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.151259 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.160802 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.172436 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.186819 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.195680 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.206970 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.217022 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.226539 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.233749 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.233789 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.233801 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.233819 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.233830 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.267995 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc 
kubenswrapper[5004]: I1208 18:52:31.308244 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.308285 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.308306 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.308339 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308457 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308472 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308482 5004 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308505 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308546 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308561 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308563 5004 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.308520127 +0000 UTC m=+85.957428435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308584 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308636 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.308614651 +0000 UTC m=+85.957522999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308659 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.308647002 +0000 UTC m=+85.957555400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308711 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.308814 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.308796935 +0000 UTC m=+85.957705243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.309832 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.336528 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.336574 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.336585 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.336603 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.336612 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.347661 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.396865 5004 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\
\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a
0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.409456 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.409671 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.409655714 +0000 UTC m=+86.058564022 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.427283 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.439068 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.439151 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.439170 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.439193 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.439209 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.469166 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.507773 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.510261 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.510482 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: E1208 18:52:31.510589 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:52:32.510569245 +0000 UTC m=+86.159477553 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.545456 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.545492 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.545520 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.545537 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.545548 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.549635 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.589252 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.627371 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.647929 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.648007 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.648019 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.648038 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.648051 5004 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.670010 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.708935 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.749586 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.749638 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.749648 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.749667 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.749678 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.751055 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.786503 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.829578 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.851823 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.851865 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.851878 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.851897 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 
18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.851909 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.868779 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.906003 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.946460 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.953704 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.953742 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.953754 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.953773 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.953785 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:31Z","lastTransitionTime":"2025-12-08T18:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:31 crc kubenswrapper[5004]: I1208 18:52:31.989579 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.027260 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.055451 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.055505 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.055521 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.055544 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.055561 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.072523 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-override
s\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.083326 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-67htd" event={"ID":"4b57acd8-c7ba-499a-8742-2a6fb585c7de","Type":"ContainerStarted","Data":"7e50c83c3f1c27f8fd2cf0cf4658ed5e2f5db3589f0704465238b468e143e537"} Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.085121 5004 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:52:32 crc kubenswrapper[5004]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 18:52:32 crc kubenswrapper[5004]: while [ true ]; Dec 08 18:52:32 crc kubenswrapper[5004]: do Dec 08 18:52:32 crc kubenswrapper[5004]: for f in $(ls /tmp/serviceca); do Dec 08 18:52:32 crc kubenswrapper[5004]: echo $f Dec 08 18:52:32 crc kubenswrapper[5004]: ca_file_path="/tmp/serviceca/${f}" Dec 08 18:52:32 crc kubenswrapper[5004]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 18:52:32 crc kubenswrapper[5004]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 18:52:32 crc kubenswrapper[5004]: if [ -e "${reg_dir_path}" ]; then Dec 08 18:52:32 crc kubenswrapper[5004]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 18:52:32 crc kubenswrapper[5004]: else Dec 08 18:52:32 crc kubenswrapper[5004]: mkdir $reg_dir_path Dec 08 18:52:32 crc kubenswrapper[5004]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 18:52:32 crc kubenswrapper[5004]: fi Dec 08 18:52:32 crc kubenswrapper[5004]: done Dec 08 18:52:32 crc kubenswrapper[5004]: for d in $(ls /etc/docker/certs.d); do Dec 08 18:52:32 crc kubenswrapper[5004]: echo $d Dec 08 18:52:32 crc kubenswrapper[5004]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 18:52:32 crc kubenswrapper[5004]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 18:52:32 crc kubenswrapper[5004]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 18:52:32 crc kubenswrapper[5004]: rm -rf /etc/docker/certs.d/$d Dec 08 18:52:32 crc kubenswrapper[5004]: fi Dec 08 18:52:32 crc kubenswrapper[5004]: done Dec 08 18:52:32 crc kubenswrapper[5004]: sleep 60 & wait ${!} Dec 08 18:52:32 crc kubenswrapper[5004]: done Dec 08 18:52:32 crc kubenswrapper[5004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xfns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-67htd_openshift-image-registry(4b57acd8-c7ba-499a-8742-2a6fb585c7de): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:52:32 crc kubenswrapper[5004]: > logger="UnhandledError" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.086564 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-67htd" podUID="4b57acd8-c7ba-499a-8742-2a6fb585c7de" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.105308 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.156348 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a038
23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.157377 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.157601 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.157755 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.157905 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.158037 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.189117 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.226868 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.260653 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.260734 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.260753 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.260778 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.260795 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.267520 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.311460 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.319145 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.319202 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319392 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319397 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319576 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.319541782 +0000 UTC m=+87.968450100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319418 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319635 5004 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319439 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319721 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319737 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319782 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.319762659 +0000 UTC m=+87.968670967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.319808 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.31979944 +0000 UTC m=+87.968707748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.319444 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.320289 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.320576 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.320721 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.320702009 +0000 UTC m=+87.969610317 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.347848 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.362921 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.363210 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.363274 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.363376 5004 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.363452 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.388484 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.421423 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.421579 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.421560428 +0000 UTC m=+88.070468736 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.431174 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.455650 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.455708 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.455725 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.455746 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.455762 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.465461 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.468621 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.468651 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.468659 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.468673 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.468686 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.469242 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.477004 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd
602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"si
zeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.479919 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.479943 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.479952 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.479966 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.479975 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.487642 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.490379 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.490408 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.490417 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.490430 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.490440 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.497748 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.500674 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.500711 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.500722 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.500738 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.500749 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.508320 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.509094 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b
5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":52777434
2},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.509363 5004 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.510418 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.510523 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.510581 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.510656 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.510717 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.522988 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.523131 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.523211 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:52:34.523196112 +0000 UTC m=+88.172104420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.546183 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.587709 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.613193 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.613480 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.613570 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.613636 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.613691 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.628813 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.666866 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.707094 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.709294 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.709395 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.709441 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.709553 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.709301 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.709715 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.709927 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:32 crc kubenswrapper[5004]: E1208 18:52:32.710252 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.713385 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.714376 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.715340 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.715398 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.715407 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.715423 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.715434 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.716606 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.717872 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.720220 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.721703 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.723064 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.724440 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.725240 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" 
Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.726881 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.727807 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.729781 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.730630 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.732030 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.732615 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.733307 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.734498 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.735537 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.736688 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.737495 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.738508 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.740467 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.741767 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 
18:52:32.742761 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.743986 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.745036 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.746004 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.747494 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.747761 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.750416 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.750978 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.752214 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.753365 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.755213 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.756026 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.757186 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.757759 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.758471 5004 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.758981 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.762025 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.763796 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.764896 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.766490 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.767039 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.768550 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.769356 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.769960 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.771060 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.771974 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.773241 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.774176 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.775199 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.775864 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.776949 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.777829 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.779352 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.780063 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.781324 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.782291 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.789356 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.817421 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.817705 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.817845 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.817957 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.818035 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.840473 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.866751 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.913393 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf3618
40d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.919428 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.919459 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.919471 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.919487 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.919497 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:32Z","lastTransitionTime":"2025-12-08T18:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:32 crc kubenswrapper[5004]: I1208 18:52:32.948368 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.022172 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.022244 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.022268 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.022296 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.022318 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.125227 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.125531 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.125557 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.125741 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.125780 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.228577 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.228646 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.228669 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.228699 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.228722 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.330736 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.330826 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.330857 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.330888 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.330905 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.366857 5004 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.433112 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.433152 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.433163 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.433178 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.433188 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.535330 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.535826 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.536023 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.536269 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.536452 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.638894 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.638961 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.638980 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.639009 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.639026 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.741544 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.741582 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.741592 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.741607 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.741617 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.844701 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.844768 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.844787 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.844815 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.844833 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.947856 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.948269 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.948327 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.948346 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:33 crc kubenswrapper[5004]: I1208 18:52:33.948360 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:33Z","lastTransitionTime":"2025-12-08T18:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.050785 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.051399 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.051490 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.051569 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.051635 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.153832 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.154395 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.154619 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.154831 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.154978 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.257461 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.257507 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.257519 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.257535 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.257547 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.342576 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.342666 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.342695 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.342722 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.342822 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.342890 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.342873164 +0000 UTC m=+91.991781472 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343293 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343318 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343330 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343384 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.34336844 +0000 UTC m=+91.992276748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343413 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343467 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343475 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343616 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.343590316 +0000 UTC m=+91.992498634 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343483 5004 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.343755 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.343732081 +0000 UTC m=+91.992640389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.360066 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.360154 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.360191 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.360213 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.360226 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.444124 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.444243 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.444224958 +0000 UTC m=+92.093133256 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.462693 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.462737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.462747 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.462762 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.462773 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.545020 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.545233 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.545347 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:52:38.545322534 +0000 UTC m=+92.194230842 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.565300 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.565385 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.565414 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.565445 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.565472 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.667366 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.667412 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.667423 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.667441 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.667452 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.709942 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.710116 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.709963 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.709942 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.710202 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.710217 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.710336 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:34 crc kubenswrapper[5004]: E1208 18:52:34.710424 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.770053 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.770350 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.770473 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.770605 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.770715 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.872619 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.872675 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.872691 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.872765 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.872814 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.974557 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.974596 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.974608 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.974625 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:34 crc kubenswrapper[5004]: I1208 18:52:34.974635 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:34Z","lastTransitionTime":"2025-12-08T18:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.077021 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.077255 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.077283 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.077302 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.077311 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.180054 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.180123 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.180135 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.180151 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.180162 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.282328 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.282372 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.282384 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.282400 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.282411 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.384219 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.384286 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.384295 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.384313 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.384324 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.486503 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.486572 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.486587 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.486602 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.486894 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.590054 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.590165 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.590191 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.590220 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.590243 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.692950 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.692987 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.692996 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.693009 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.693019 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.795054 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.795102 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.795112 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.795125 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.795134 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.897849 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.897903 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.897916 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.897932 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:35 crc kubenswrapper[5004]: I1208 18:52:35.897945 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:35Z","lastTransitionTime":"2025-12-08T18:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.000377 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.000442 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.000454 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.000471 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.000483 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.102303 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.102344 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.102353 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.102367 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.102376 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.204829 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.204877 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.204886 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.204900 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.204909 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.307526 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.307579 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.307596 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.307621 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.307637 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.410436 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.410507 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.410520 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.410543 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.410558 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.512606 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.512681 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.512693 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.512715 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.512727 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.615604 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.615763 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.615775 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.615796 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.615806 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.710106 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.710128 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.710150 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:36 crc kubenswrapper[5004]: E1208 18:52:36.710253 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:36 crc kubenswrapper[5004]: E1208 18:52:36.710708 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:36 crc kubenswrapper[5004]: E1208 18:52:36.710779 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.711400 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:36 crc kubenswrapper[5004]: E1208 18:52:36.711571 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.717678 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.717902 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.717962 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.718027 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.718109 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.720112 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.730292 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.743584 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.754458 
5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.764551 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tm
p\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.775359 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.785054 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.795453 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.809537 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.820944 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.821018 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.821034 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.821058 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.821095 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.821507 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.836905 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d
9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.845123 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.861180 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf3618
40d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.870423 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.878904 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.886223 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.897151 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\
"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.908316 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.919929 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.924456 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.924563 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.924625 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.924712 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:36 crc kubenswrapper[5004]: I1208 18:52:36.924776 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:36Z","lastTransitionTime":"2025-12-08T18:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.027372 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.027431 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.027442 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.027463 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.027486 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.130388 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.131034 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.131176 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.131288 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.131392 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.234206 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.234263 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.234278 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.234300 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.234315 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.337186 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.337620 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.337717 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.337807 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.337902 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.440680 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.440737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.440748 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.440774 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.440787 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.548226 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.548302 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.548315 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.548333 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.548345 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.650516 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.650547 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.650555 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.650574 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.650583 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.753275 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.753346 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.753358 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.753384 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.753397 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.857377 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.857434 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.857448 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.857469 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.857482 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.961610 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.961674 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.961686 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.961704 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:37 crc kubenswrapper[5004]: I1208 18:52:37.961716 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:37Z","lastTransitionTime":"2025-12-08T18:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.064152 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.064226 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.064247 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.064274 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.064289 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.167322 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.167366 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.167377 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.167391 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.167401 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.176392 5004 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.269578 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.269616 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.269628 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.269643 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.269655 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.372046 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.372123 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.372139 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.372159 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.372172 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.393453 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.393554 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.393598 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.393630 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393686 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393725 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393740 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393754 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393769 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393779 5004 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 
18:52:38.393789 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393813 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.393789382 +0000 UTC m=+100.042697700 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393833 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.393823823 +0000 UTC m=+100.042732261 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.393915 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.393841274 +0000 UTC m=+100.042749582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.394146 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.394197 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.394183435 +0000 UTC m=+100.043091823 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.474858 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.474944 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.474972 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.475000 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.475018 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.494311 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.494669 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.49462752 +0000 UTC m=+100.143535878 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.577249 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.577424 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.577442 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.577462 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.577474 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.596009 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.596189 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.596263 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:52:46.596244343 +0000 UTC m=+100.245152651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.680349 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.680405 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.680415 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.680429 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.680437 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.710042 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.710144 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.710242 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.710251 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.710376 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.710504 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.710568 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:38 crc kubenswrapper[5004]: E1208 18:52:38.710657 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.783054 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.783132 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.783145 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.783164 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.783176 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.885153 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.885224 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.885236 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.885254 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.885266 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.988562 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.988615 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.988627 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.988646 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:38 crc kubenswrapper[5004]: I1208 18:52:38.988664 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:38Z","lastTransitionTime":"2025-12-08T18:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.091536 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.091572 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.091582 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.091596 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.091605 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.193929 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.194018 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.194030 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.194049 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.194060 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.296769 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.296822 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.296833 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.296851 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.296864 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.399145 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.399206 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.399222 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.399241 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.399252 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.501913 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.501963 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.501976 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.501992 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.502003 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.603921 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.603981 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.603998 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.604020 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.604045 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.707455 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.707512 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.707524 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.707542 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.707553 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.810051 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.810119 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.810131 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.810147 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.810158 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.912434 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.912804 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.912891 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.912965 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:39 crc kubenswrapper[5004]: I1208 18:52:39.913038 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:39Z","lastTransitionTime":"2025-12-08T18:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.015093 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.015499 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.015597 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.015706 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.015797 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.118238 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.118747 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.118879 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.118976 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.119228 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.221767 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.221849 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.221860 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.221876 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.221885 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.324220 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.324273 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.324283 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.324298 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.324308 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.427505 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.427596 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.427617 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.427645 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.427673 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.530012 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.530066 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.530102 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.530135 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.530151 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.632398 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.632435 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.632446 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.632463 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.632474 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.709899 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:40 crc kubenswrapper[5004]: E1208 18:52:40.710020 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.710258 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:40 crc kubenswrapper[5004]: E1208 18:52:40.710348 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.710374 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:40 crc kubenswrapper[5004]: E1208 18:52:40.710431 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.710467 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:40 crc kubenswrapper[5004]: E1208 18:52:40.710537 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.734714 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.735037 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.735268 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.735417 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.735537 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.837653 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.837947 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.838059 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.838416 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.838546 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.940894 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.940977 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.941005 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.941037 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:40 crc kubenswrapper[5004]: I1208 18:52:40.941061 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:40Z","lastTransitionTime":"2025-12-08T18:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.044388 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.044526 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.044576 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.044620 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.044666 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.158006 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.158043 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.158052 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.158065 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.158107 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.260805 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.261471 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.261568 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.261664 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.261756 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.364697 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.365045 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.365243 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.365375 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.365502 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.467950 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.468342 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.468506 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.468651 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.468788 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.572478 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.572563 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.572594 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.572625 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.572646 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.675438 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.675518 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.675537 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.675562 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.675580 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.778199 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.778481 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.778511 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.778543 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.778567 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.881142 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.881614 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.881629 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.881647 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.881662 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.983588 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.983647 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.983659 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.983679 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:41 crc kubenswrapper[5004]: I1208 18:52:41.983694 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:41Z","lastTransitionTime":"2025-12-08T18:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.086525 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.086571 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.086580 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.086594 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.086604 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.115254 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84" exitCode=0 Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.115434 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.118109 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" exitCode=0 Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.118179 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.132910 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.144123 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.156704 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.172534 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.184892 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.190842 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.190902 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.190916 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.190932 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.190943 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.200705 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-releas
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibi
n\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.209388 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.219273 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.230293 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.239848 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.250798 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.263025 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.274209 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.292376 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.296210 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.296250 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: 
I1208 18:52:42.296261 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.296278 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.296294 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.300786 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 
18:52:42.322169 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",
\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedRe
sources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"
mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.334324 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
sources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.353389 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.366677 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.385901 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff
9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.398336 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.398391 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.398404 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.398420 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.398428 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.399036 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.411669 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.419730 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.431802 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.448142 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.459976 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.471632 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.482996 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.497331 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.500711 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.500747 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.500759 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.500775 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.500785 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.506190 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.515397 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.524090 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.534010 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.542390 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.551305 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.561832 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.577729 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.577772 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.577784 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.577801 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.577812 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.583642 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d26
92bab1c22ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.588791 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.592779 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.592841 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.592850 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.592866 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.593066 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.595334 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.601468 5004 
kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redha
t/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b604
18\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.607507 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.607555 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.607564 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.607580 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.607589 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.619367 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.623516 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.623652 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.623712 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.623771 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.623825 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.632340 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.635641 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.635781 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.635868 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.635974 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.636102 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.644495 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143984Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604784Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4b514b11-7c3d-40a7-962d-40f2ee014679\\\",\\\"systemUUID\\\":\\\"2a592c3d-8402-4b24-bfed-95916d7ee8fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.644614 5004 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.645422 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.645449 5004 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.645460 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.645477 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.645489 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.711636 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.711805 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.712145 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.712203 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.712562 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.712627 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.712678 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.712729 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.713044 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:42 crc kubenswrapper[5004]: E1208 18:52:42.713122 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.756287 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.756334 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.756345 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.756370 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.756385 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.858755 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.859182 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.859196 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.859214 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.859227 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.961766 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.961805 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.961816 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.961835 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:42 crc kubenswrapper[5004]: I1208 18:52:42.961845 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:42Z","lastTransitionTime":"2025-12-08T18:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.064673 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.064735 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.064748 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.064771 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.064785 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.121181 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671" exitCode=0 Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.121261 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127278 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127329 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127342 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127354 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127366 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.127378 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.128531 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7cqb6" event={"ID":"f740204d-ae80-410c-85a7-d7e935eed5d0","Type":"ContainerStarted","Data":"08613f7ddc94c00630501695ccfabe1403294e221d4b80b6c0c35bc7b7cf5404"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.138060 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.146203 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.156549 5004 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.165564 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.175556 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.175630 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.175648 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.175667 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.175683 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.203236 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d26
92bab1c22ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.214391 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.253588 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf3618
40d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.282372 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.282463 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.282480 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.282502 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.282514 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.289054 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.300914 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.313240 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.329302 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.340321 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.349494 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.358775 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.370128 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386129 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\
":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitia
lizing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386515 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386540 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386550 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386567 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.386578 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.396552 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.408152 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGrou
ps\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.420460 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.431640 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.443611 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.452140 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-67htd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b57acd8-c7ba-499a-8742-2a6fb585c7de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xfns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-67htd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.465475 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e72fac8-ae14-48dc-b490-c2ed622b1496\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:12Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 18:52:12.370997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:12.371158 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:12.371965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3444613524/tls.crt::/tmp/serving-cert-3444613524/tls.key\\\\\\\"\\\\nI1208 18:52:12.804051 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:12.806014 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:12.806032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:12.806058 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:12.806089 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:12.810417 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 18:52:12.810442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:12.810454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:12.810457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:12.810461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:12.810465 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 18:52:12.811221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1208 18:52:12.811550 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.480014 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.489204 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.489248 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.489259 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.489276 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.489290 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.495619 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.506740 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.519663 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qxdkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-blpqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxdkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.535107 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5285d47c-a794-4eb8-a948-e1f8a9e64ec8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9901d38ba4a14ef98a6b2d64c3187e63d3113da8ef3a1182cd3579dd803fa84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aae4ea4c03d1ba100a5e385ddf39b503fcdf1c2bcc620eb4c07f56485abbb671\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\
":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitia
lizing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6w87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q4dd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.544524 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a970011f-45a5-42cf-8cee-30ac5db79bcc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ceece6b84e9998f87c61e1f56040d646be12c971c4a0e174c436cef40ae90d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114bc5f03546669e16149b868cb0f1953fe7416833310ad60bd20f59f5fde9bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.556198 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8338b767-1190-4105-a541-e77d62cd5a2a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3d8270418b2788ebe71b2909d0b4abddc2244a70dc3605c5641d9c35b484b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://098dd62e95d20b99b01dd085ad8f9512bbbd707f3f7dbeeb36832d35d7e693d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2f97dd83a25fd213095dabcd8b83156891ccf4ed81eaaaa796e8481d2f2b9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a519d04afff3ab25e36643f06080648b1904b4324951b8d8342ed119710d33ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.568294 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.578687 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b69152-f317-4e7b-9215-fc6c71abc31f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkwxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wmb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.588892 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dfac61-6fa6-441d-83f2-c2f275a144e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8l8m8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-c924z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.592873 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.593008 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.593106 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.593217 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.593298 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.599140 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.610464 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5db7afc3-55ae-4aa9-9946-c263aeffae20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqlsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xnzfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.630839 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn
-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:52:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
6ntv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dmsk4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.638962 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://08613f7ddc94c00630501695ccfabe1403294e221d4b80b6c0c35bc7b7cf5404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:52:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.659020 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f92
02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569a
a5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"t
mp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.695771 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.695838 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.695852 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.695880 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.695893 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.798868 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.799577 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.799861 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.800175 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.800363 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.903224 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.903257 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.903266 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.903281 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:43 crc kubenswrapper[5004]: I1208 18:52:43.903291 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:43Z","lastTransitionTime":"2025-12-08T18:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.005401 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.005444 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.005455 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.005473 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.005483 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.107767 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.107816 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.107850 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.107868 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.107878 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.133360 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f565c2093076651f799232adbb1ce9365aa95eef77987f9e9cfd76a05843a741"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.136055 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="e1ea62717e4eb15ba1743cf360bd83a73f54e8ff45576e8aa41adb75148e41b5" exitCode=0 Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.136117 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"e1ea62717e4eb15ba1743cf360bd83a73f54e8ff45576e8aa41adb75148e41b5"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.149315 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7cqb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f740204d-ae80-410c-85a7-d7e935eed5d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://08613f7ddc94c00630501695ccfabe1403294e221d4b80b6c0c35bc7b7cf5404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:52:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8g2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:52:3
0Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7cqb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.169544 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f1c89c8-a16d-4c49-90a7-82cb03f5bb40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac4895d52b9fcfb486a11df3773432f8831974230f588dfaa9e7f06495dc4924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b42892be32acaa7b06c6e857ec23f014b3e6c1970024e14ca02d95ae338ad6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc3484a5a5b984d0ac5c6af03b89d29740df4b01157f109fcf540169ce4f9202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5084ae4dcc071f842ec9f492c553c81126b630e04bead8b5a0119e7f4c135616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d688273cd1958b8f3d8aa55ece4cf4f308585f15078c95d35bf5da8d6992f15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f8ac69957a463c4e5352569aa5f28cfe065ff9c07bdfc7b4563ab831523b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ec3389f55e5af8397e4e4a46b8a04c1d812745334680ca43f8109d77a03823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1a2fb410b7a75e13dfc63969334f27449dc5ee53357fd55a37a1af4eb308d5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-08T18:51:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.181343 5004 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57538fe6-13b0-4e35-a865-b1d74615032a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbb2ea5a48b24ca25c3ac63554eb020e08c67e3226de5728eecd9bcf3cabbb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a9d26e3d4a02181df0e073c674b5d725a576016ad7e1dc5ab44c465a64e324e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8383febe745342cf35e7f98e208d86c5847e2fbebb4e996f633066fa72effb84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\
"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.210343 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.210395 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.210410 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.210428 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.210445 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.313649 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.313722 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.313741 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.313764 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.313781 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.361255 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=15.361226331 podStartE2EDuration="15.361226331s" podCreationTimestamp="2025-12-08 18:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:44.361095247 +0000 UTC m=+98.010003575" watchObservedRunningTime="2025-12-08 18:52:44.361226331 +0000 UTC m=+98.010134639" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.361704 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.361698176 podStartE2EDuration="15.361698176s" podCreationTimestamp="2025-12-08 18:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:44.34096184 +0000 UTC m=+97.989870168" watchObservedRunningTime="2025-12-08 18:52:44.361698176 +0000 UTC m=+98.010606484" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.419671 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.420211 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.420229 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.420252 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.420285 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.522834 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.522871 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.522880 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.522892 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.522903 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.625174 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.625219 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.625230 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.625247 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.625258 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.709444 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.709512 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.709470 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:44 crc kubenswrapper[5004]: E1208 18:52:44.709660 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:44 crc kubenswrapper[5004]: E1208 18:52:44.710112 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:44 crc kubenswrapper[5004]: E1208 18:52:44.710237 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.710306 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:44 crc kubenswrapper[5004]: E1208 18:52:44.710385 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.728283 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.728333 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.728360 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.728379 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.728391 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.831724 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.831776 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.831792 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.831807 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.831816 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.933852 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.933902 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.933912 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.933928 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:44 crc kubenswrapper[5004]: I1208 18:52:44.933938 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:44Z","lastTransitionTime":"2025-12-08T18:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.035976 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.036017 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.036029 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.036042 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.036050 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.140990 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.141024 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.141034 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.141049 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.141059 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.141766 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"aa69820ddc874c016c9f16a56009304817b046452d559bc58c83e0777b411ea5"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.142883 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"10d55277427765710250639b1501f8729363d5a80754926e2778d9c5b28a97e1"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.145621 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="833e569b4feaa362ff3e358a076a9398c0b42d0ef3b69db15f5a71df179441ed" exitCode=0 Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.145690 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"833e569b4feaa362ff3e358a076a9398c0b42d0ef3b69db15f5a71df179441ed"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.147222 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"08c9f5be18c1003dee6bbc6e663d87ccd18c7f66d073d92e1806f90fbcbfb865"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.147260 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.149727 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"f6c2e165c5712a9fca7d28b625131754d5ea06b948f59970e735457e6249e4fa"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.162439 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerStarted","Data":"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.162521 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerStarted","Data":"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.165751 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.186672 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7cqb6" podStartSLOduration=78.186653276 podStartE2EDuration="1m18.186653276s" 
podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:45.186131 +0000 UTC m=+98.835039328" watchObservedRunningTime="2025-12-08 18:52:45.186653276 +0000 UTC m=+98.835561584" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.236851 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.236826987 podStartE2EDuration="16.236826987s" podCreationTimestamp="2025-12-08 18:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:45.236700403 +0000 UTC m=+98.885608731" watchObservedRunningTime="2025-12-08 18:52:45.236826987 +0000 UTC m=+98.885735295" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.248751 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.248798 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.248812 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.248834 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.248848 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.277569 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=16.277554585 podStartE2EDuration="16.277554585s" podCreationTimestamp="2025-12-08 18:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:45.276602255 +0000 UTC m=+98.925510583" watchObservedRunningTime="2025-12-08 18:52:45.277554585 +0000 UTC m=+98.926462893" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.356886 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.358157 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.358391 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.358479 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.358567 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.399406 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podStartSLOduration=78.399380807 podStartE2EDuration="1m18.399380807s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:45.398703976 +0000 UTC m=+99.047612304" watchObservedRunningTime="2025-12-08 18:52:45.399380807 +0000 UTC m=+99.048289125" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.399551 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" podStartSLOduration=77.399545982 podStartE2EDuration="1m17.399545982s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:45.372049319 +0000 UTC m=+99.020957637" watchObservedRunningTime="2025-12-08 18:52:45.399545982 +0000 UTC m=+99.048454290" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.461984 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.462057 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.462096 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc 
kubenswrapper[5004]: I1208 18:52:45.462128 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.462142 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.564185 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.564236 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.564249 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.564265 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.564281 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.666868 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.666937 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.666951 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.666970 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.666982 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.769136 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.769457 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.769468 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.769483 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.769491 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.872566 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.872613 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.872625 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.872643 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.872657 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.975271 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.975324 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.975342 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.975358 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:45 crc kubenswrapper[5004]: I1208 18:52:45.975367 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:45Z","lastTransitionTime":"2025-12-08T18:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.077398 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.077441 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.077452 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.077767 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.077804 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.180642 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.180721 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.180738 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.181153 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.181202 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.283092 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.283141 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.283154 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.283175 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.283187 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.386558 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.386596 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.386627 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.386643 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.386653 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.398780 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.398845 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.398876 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.398941 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399150 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399172 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399192 5004 projected.go:194] Error preparing data for 
projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399273 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.399255414 +0000 UTC m=+116.048163722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399824 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399846 5004 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399855 5004 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399906 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.399895805 +0000 UTC m=+116.048804113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.399978 5004 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.400010 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.400001368 +0000 UTC m=+116.048909676 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.400062 5004 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.400131 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.400119992 +0000 UTC m=+116.049028300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.488889 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.488933 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.488949 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.488970 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.488985 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.500049 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.500398 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.500376811 +0000 UTC m=+116.149285119 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.591501 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.591549 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.591559 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.591575 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.591591 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.601427 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.601797 5004 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.601969 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs podName:89b69152-f317-4e7b-9215-fc6c71abc31f nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.601933882 +0000 UTC m=+116.250842190 (durationBeforeRetry 16s). 
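The recurring "durationBeforeRetry 16s" values above come from the kubelet's per-operation retry backoff: each failed MountVolume/UnmountVolume attempt is rescheduled with a delay that grows on consecutive failures until the blocking condition (unregistered ConfigMap/Secret, missing CSI driver) clears. A minimal sketch of such a doubling backoff follows; the initial delay and the cap are assumptions for illustration, not values read from the kubelet source.

package main

import (
	"fmt"
	"time"
)

// Sketch of a per-operation doubling backoff consistent with the
// "durationBeforeRetry" values in the log. Initial delay and cap are assumed.
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
		return b.current
	}
	b.current *= 2
	if b.current > b.max {
		b.current = b.max
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 7; i++ {
		// After several consecutive failures the delay reaches the 16s range
		// seen above, before eventually hitting the cap.
		fmt.Printf("retry %d after %v\n", i, b.next())
	}
}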
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs") pod "network-metrics-daemon-7wmb8" (UID: "89b69152-f317-4e7b-9215-fc6c71abc31f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.693834 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.693894 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.693906 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.693925 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.693938 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.711193 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.711308 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.712437 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.712567 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.712646 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.712705 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.712766 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:46 crc kubenswrapper[5004]: E1208 18:52:46.712826 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.796536 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.796591 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.796602 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.796616 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.796627 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.902695 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.902750 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.902762 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.902780 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:46 crc kubenswrapper[5004]: I1208 18:52:46.902792 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:46Z","lastTransitionTime":"2025-12-08T18:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.005600 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.006158 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.006172 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.006210 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.006226 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.109910 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.109955 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.109968 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.109986 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.110000 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.178643 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerStarted","Data":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.179268 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.179291 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.179300 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.180794 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxdkt" event={"ID":"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5","Type":"ContainerStarted","Data":"6002385cc01ae78d4d79b236983c1c75f317e016a323301fbef2d9d8c68325a6"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.183210 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-67htd" event={"ID":"4b57acd8-c7ba-499a-8742-2a6fb585c7de","Type":"ContainerStarted","Data":"b99c0adf372d7318653d3f2d9d18f378ec4f899fe4a2a04803d9db2539051080"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.192160 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="78d5fd4f64fc45b5da9d2896917a2c090a35bdb660f9821fc90f2a7aad17fd08" exitCode=0 Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.192417 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"78d5fd4f64fc45b5da9d2896917a2c090a35bdb660f9821fc90f2a7aad17fd08"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.209251 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.211583 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.212304 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.212357 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.212370 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.212390 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.212431 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.236413 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podStartSLOduration=80.236386035 podStartE2EDuration="1m20.236386035s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:47.208326414 +0000 UTC m=+100.857234732" watchObservedRunningTime="2025-12-08 18:52:47.236386035 +0000 UTC m=+100.885294333" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.236732 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qxdkt" podStartSLOduration=80.236727716 podStartE2EDuration="1m20.236727716s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:47.231358764 +0000 UTC m=+100.880267092" watchObservedRunningTime="2025-12-08 18:52:47.236727716 +0000 UTC m=+100.885636024" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.273052 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-67htd" podStartSLOduration=79.273029022 podStartE2EDuration="1m19.273029022s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:47.272499224 +0000 UTC m=+100.921407532" watchObservedRunningTime="2025-12-08 18:52:47.273029022 +0000 UTC m=+100.921937340" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.314225 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.314277 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.314291 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.314307 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.314319 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
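The pod_startup_latency_tracker entries above make the arithmetic explicit: for ovnkube-node-dmsk4, podStartSLOduration=80.236386035s is watchObservedRunningTime (2025-12-08 18:52:47.236386035 UTC) minus podCreationTimestamp (2025-12-08 18:51:27 UTC), with no image-pull time subtracted because firstStartedPulling/lastFinishedPulling are zero. The same subtraction in Go reproduces the logged podStartE2EDuration:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic visible in the
// ovnkube-node-dmsk4 entry: watch-observed running time minus creation time.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-08 18:51:27 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-12-08 18:52:47.236386035 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1m20.236386035s, as reported in the log
}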
Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.416862 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.416910 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.416921 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.416938 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.416951 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.522352 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.522420 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.522441 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.522467 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.522485 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.625456 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.625811 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.625828 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.625881 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.625900 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.728006 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.728056 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.728112 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.728136 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.728154 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.830259 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.830314 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.830335 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.830357 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.830372 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.932798 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.932859 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.932869 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.932887 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:47 crc kubenswrapper[5004]: I1208 18:52:47.932898 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:47Z","lastTransitionTime":"2025-12-08T18:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.035751 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.036132 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.036143 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.036160 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.036174 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.139237 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.139282 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.139292 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.139315 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.139325 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.198512 5004 generic.go:358] "Generic (PLEG): container finished" podID="5285d47c-a794-4eb8-a948-e1f8a9e64ec8" containerID="ab9876c51072afbe19ba4fa37fe69593bf24681a2f00e084ba2e91e5a397ef64" exitCode=0 Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.199767 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerDied","Data":"ab9876c51072afbe19ba4fa37fe69593bf24681a2f00e084ba2e91e5a397ef64"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.248220 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.248275 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.248292 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.248312 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.248327 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.350602 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.350654 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.350665 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.350684 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.350698 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.452872 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.452912 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.452925 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.452941 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.452978 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.556501 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.556556 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.556567 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.556586 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.556598 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.659544 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.659595 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.659606 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.659620 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.659630 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.709942 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.709942 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:48 crc kubenswrapper[5004]: E1208 18:52:48.710118 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.710128 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:48 crc kubenswrapper[5004]: E1208 18:52:48.710196 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:48 crc kubenswrapper[5004]: E1208 18:52:48.710281 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.710355 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:48 crc kubenswrapper[5004]: E1208 18:52:48.710502 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.761374 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.761445 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.761457 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.761471 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.761480 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.863433 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.863493 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.863506 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.863522 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.863533 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.966735 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.966820 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.966837 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.966863 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:48 crc kubenswrapper[5004]: I1208 18:52:48.966876 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:48Z","lastTransitionTime":"2025-12-08T18:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.069182 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.069244 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.069255 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.069276 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.069289 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.172758 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.172804 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.172814 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.172829 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.172840 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.205511 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" event={"ID":"5285d47c-a794-4eb8-a948-e1f8a9e64ec8","Type":"ContainerStarted","Data":"31fe157a62eb5ab45a94640ddf225c5e7c302725819528a791bc1f055cf0ccad"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.275194 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.275238 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.275250 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.275266 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.275276 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.376962 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.376996 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.377004 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.377019 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.377030 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.479241 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.479299 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.479311 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.479327 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.479345 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.581666 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.581707 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.581717 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.581735 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.581745 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.689325 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.689370 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.689388 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.689407 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.689419 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.709513 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-q4dd6" podStartSLOduration=82.709495629 podStartE2EDuration="1m22.709495629s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:49.229725653 +0000 UTC m=+102.878633961" watchObservedRunningTime="2025-12-08 18:52:49.709495629 +0000 UTC m=+103.358403937" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.709834 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7wmb8"] Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.709934 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:49 crc kubenswrapper[5004]: E1208 18:52:49.710041 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.791842 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.791886 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.791899 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.791916 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.791928 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.897658 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.897709 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.897719 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.897732 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.897742 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.999561 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.999594 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.999607 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.999619 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[5004]: I1208 18:52:49.999628 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.101737 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.101802 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.101824 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.101845 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.101861 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.204320 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.204355 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.204365 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.204378 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.204388 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.317304 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.317347 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.317357 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.317370 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.317379 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.419484 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.419529 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.419540 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.419554 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.419566 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.521014 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.521060 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.521086 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.521103 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.521115 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.623299 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.623337 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.623349 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.623364 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.623376 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.709810 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:50 crc kubenswrapper[5004]: E1208 18:52:50.709936 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.710197 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:50 crc kubenswrapper[5004]: E1208 18:52:50.710292 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.710325 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:50 crc kubenswrapper[5004]: E1208 18:52:50.710390 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.725140 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.725181 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.725199 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.725215 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.725228 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.827207 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.827247 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.827256 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.827273 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.827282 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.929091 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.929159 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.929173 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.929190 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:50 crc kubenswrapper[5004]: I1208 18:52:50.929200 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:50Z","lastTransitionTime":"2025-12-08T18:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.032006 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.032066 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.032121 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.032156 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.032176 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.134524 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.134588 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.134647 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.134679 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.134703 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.237981 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.238030 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.238041 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.238057 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.238085 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.340412 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.340484 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.340497 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.340515 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.340539 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.442928 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.442980 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.442990 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.443006 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.443017 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.544974 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.545049 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.545067 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.545136 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.545149 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.647430 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.647954 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.648093 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.648193 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.648286 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.709771 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:51 crc kubenswrapper[5004]: E1208 18:52:51.710051 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.750499 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.750557 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.750569 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.750588 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.750601 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.852598 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.852677 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.852693 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.852718 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.852734 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.954788 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.955175 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.955366 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.955553 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:51 crc kubenswrapper[5004]: I1208 18:52:51.955690 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:51Z","lastTransitionTime":"2025-12-08T18:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.057976 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.058139 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.058154 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.058171 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.058182 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.160546 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.160783 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.160871 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.160958 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.161095 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.263656 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.264145 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.264339 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.264534 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.264736 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.366746 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.366789 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.366799 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.366813 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.366822 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.469112 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.469349 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.469413 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.469478 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.469546 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.571244 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.571305 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.571316 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.571331 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.571340 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.673435 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.673486 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.673498 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.673513 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.673526 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.709710 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.709820 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.709865 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:52 crc kubenswrapper[5004]: E1208 18:52:52.709966 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:52:52 crc kubenswrapper[5004]: E1208 18:52:52.710331 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:52:52 crc kubenswrapper[5004]: E1208 18:52:52.710387 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.775354 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.775406 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.775421 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.775437 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.775450 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.877326 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.877365 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.877376 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.877388 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.877396 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.979270 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.979313 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.979324 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.979341 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:52 crc kubenswrapper[5004]: I1208 18:52:52.979352 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:52Z","lastTransitionTime":"2025-12-08T18:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.014495 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.014554 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.014571 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.014593 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.014604 5004 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:53Z","lastTransitionTime":"2025-12-08T18:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.071280 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885"] Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.176894 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.179383 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.179384 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.179519 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.180020 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.278190 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.278236 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.278268 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.278287 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.278307 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379183 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379493 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379608 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379308 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379657 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.379889 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.380015 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.381018 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.387252 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.399367 5004 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bfde707-3e0d-48b2-8bbd-b8635cf08c04-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gt885\" (UID: \"3bfde707-3e0d-48b2-8bbd-b8635cf08c04\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.490957 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.647242 5004 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.654400 5004 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 18:52:53 crc kubenswrapper[5004]: I1208 18:52:53.709691 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:53 crc kubenswrapper[5004]: E1208 18:52:53.709876 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wmb8" podUID="89b69152-f317-4e7b-9215-fc6c71abc31f" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.207962 5004 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.208548 5004 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.225877 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" event={"ID":"3bfde707-3e0d-48b2-8bbd-b8635cf08c04","Type":"ContainerStarted","Data":"0c35dba6aab6ef38abced04bd57d889a56655c7a77c67980944f6c6fab924504"} Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.225945 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" event={"ID":"3bfde707-3e0d-48b2-8bbd-b8635cf08c04","Type":"ContainerStarted","Data":"6be66af4b7707920b11862ace1998f1a793c3dc41a99159ea9df67a6376fc363"} Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.246692 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.249334 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.249524 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.252481 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.252885 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.256330 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.268451 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.269131 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.271082 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r4pkx"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.276396 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.280818 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.281454 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.281828 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.282774 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.282940 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.283088 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.283592 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.283722 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.283809 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.283903 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.284089 5004 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.284167 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.284796 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.291826 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293678 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c90aec7c-545e-4901-836e-96f7dbc5fac5-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293737 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr88\" (UniqueName: \"kubernetes.io/projected/c90aec7c-545e-4901-836e-96f7dbc5fac5-kube-api-access-bvr88\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293771 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293827 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fldkd\" (UniqueName: \"kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293859 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c90aec7c-545e-4901-836e-96f7dbc5fac5-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293891 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 
18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293932 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293957 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvncc\" (UniqueName: \"kubernetes.io/projected/01577483-8802-409f-8495-9a15d7a1b855-kube-api-access-dvncc\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.293982 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01577483-8802-409f-8495-9a15d7a1b855-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.294006 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.294044 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.294092 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90aec7c-545e-4901-836e-96f7dbc5fac5-config\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.296729 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.299427 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-bxkfp"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.300126 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.302469 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-zvml8"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.303193 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.305159 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7q525"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.305848 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.307799 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.308578 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.310310 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gt885" podStartSLOduration=87.310297164 podStartE2EDuration="1m27.310297164s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:54.248402707 +0000 UTC m=+107.897311005" watchObservedRunningTime="2025-12-08 18:52:54.310297164 +0000 UTC m=+107.959205482" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.310811 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-xlcnv"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.311511 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.314444 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nx2nz"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.314626 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.317508 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.318471 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.321501 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-h7zw2"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.322329 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.325969 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wqg6t"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.327185 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.328481 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.329965 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.330268 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.333194 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-t7lx4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.333749 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.340180 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.341261 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.341512 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.345374 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.341266 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.347326 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.347500 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.349014 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.365017 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.365313 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.366624 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.373461 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.373489 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.373781 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.373947 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.374548 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.374783 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.375768 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.375818 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.377537 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.386826 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.387472 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.387685 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.390135 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.390181 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.390369 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.394650 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.397765 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4l7n9"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.399678 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr88\" (UniqueName: \"kubernetes.io/projected/c90aec7c-545e-4901-836e-96f7dbc5fac5-kube-api-access-bvr88\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.399919 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2ph\" (UniqueName: \"kubernetes.io/projected/f9a06cf3-6092-4304-8ce9-f26d5b97e496-kube-api-access-tx2ph\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.399955 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.399976 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-audit-dir\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.399995 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z656s\" (UniqueName: 
\"kubernetes.io/projected/9330ad80-6fc1-4c95-836e-7a077d18aeb9-kube-api-access-z656s\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400010 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400027 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400043 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8hj\" (UniqueName: \"kubernetes.io/projected/974ef9b5-cdf4-470e-8df3-f132304df404-kube-api-access-hp8hj\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400059 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-serving-cert\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400097 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-config\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400113 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-trusted-ca\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400130 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400165 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a06cf3-6092-4304-8ce9-f26d5b97e496-serving-cert\") pod 
\"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400179 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400194 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czs7\" (UniqueName: \"kubernetes.io/projected/cc8857ac-4a60-413b-beab-3bc1e52a9420-kube-api-access-7czs7\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400209 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400283 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fldkd\" (UniqueName: \"kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400402 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400421 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-config\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400437 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400460 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9330ad80-6fc1-4c95-836e-7a077d18aeb9-metrics-tls\") pod \"dns-operator-799b87ffcd-7q525\" (UID: 
\"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400474 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-client\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400494 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc8857ac-4a60-413b-beab-3bc1e52a9420-tmp-dir\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400519 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c90aec7c-545e-4901-836e-96f7dbc5fac5-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400734 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400957 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c90aec7c-545e-4901-836e-96f7dbc5fac5-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.400991 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401022 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401121 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 
18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401149 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401173 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401195 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401268 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401292 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-encryption-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401320 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401336 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-client\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401351 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-image-import-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401372 5004 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401374 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401387 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401414 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401433 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k56p\" (UniqueName: \"kubernetes.io/projected/c3637044-6420-4219-967b-128dd2dcdfcd-kube-api-access-2k56p\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401447 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-serving-cert\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401465 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dvncc\" (UniqueName: \"kubernetes.io/projected/01577483-8802-409f-8495-9a15d7a1b855-kube-api-access-dvncc\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401482 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01577483-8802-409f-8495-9a15d7a1b855-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401497 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9330ad80-6fc1-4c95-836e-7a077d18aeb9-tmp-dir\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401512 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401532 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401545 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-config\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401766 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401787 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401802 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3637044-6420-4219-967b-128dd2dcdfcd-serving-cert\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401817 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-audit\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401834 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90aec7c-545e-4901-836e-96f7dbc5fac5-config\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.401850 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwxv\" (UniqueName: \"kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402633 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402724 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzhw\" (UniqueName: \"kubernetes.io/projected/5ef4eb78-30f8-4a10-b956-a3ba6e587d53-kube-api-access-ttzhw\") pod \"downloads-747b44746d-bxkfp\" (UID: \"5ef4eb78-30f8-4a10-b956-a3ba6e587d53\") " pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402743 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402761 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402790 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c90aec7c-545e-4901-836e-96f7dbc5fac5-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.402808 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.403012 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90aec7c-545e-4901-836e-96f7dbc5fac5-config\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 
18:52:54.403067 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.404201 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.404279 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.405148 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.412609 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.412791 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.419586 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.420174 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.420452 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.420592 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.420919 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.421302 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.422160 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.425998 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.426439 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.426618 5004 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.426784 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.429550 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.429827 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.433868 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.435975 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436171 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436229 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436380 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436391 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436587 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436615 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436698 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436801 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436952 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.437118 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.437170 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.436896 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.437066 5004 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.439000 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.439258 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.439541 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.439789 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.440065 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.440331 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.440570 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.440824 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.442793 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.443898 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.444829 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c90aec7c-545e-4901-836e-96f7dbc5fac5-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.446381 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.447254 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.447395 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.447614 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.450375 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 18:52:54 crc kubenswrapper[5004]: 
I1208 18:52:54.450806 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.453195 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.453388 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.453539 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.453711 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.453928 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.454154 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.454302 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.454941 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.460793 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7q525"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.461086 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.464245 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.464479 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.481367 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.482103 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01577483-8802-409f-8495-9a15d7a1b855-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.482362 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.482560 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.483016 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.483368 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.483887 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.484669 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.485024 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.485360 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.485606 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.486056 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.486332 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.486564 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.486779 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.487012 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.487937 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.489441 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.500488 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503559 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a06cf3-6092-4304-8ce9-f26d5b97e496-serving-cert\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503596 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503618 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7czs7\" (UniqueName: \"kubernetes.io/projected/cc8857ac-4a60-413b-beab-3bc1e52a9420-kube-api-access-7czs7\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503640 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503668 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503687 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-config\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503707 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503724 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9330ad80-6fc1-4c95-836e-7a077d18aeb9-metrics-tls\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503742 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-client\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503761 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc8857ac-4a60-413b-beab-3bc1e52a9420-tmp-dir\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503783 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503801 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503823 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503844 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503863 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503883 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503900 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-encryption-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503919 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503938 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-client\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503954 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-image-import-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503973 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.503992 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504008 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2k56p\" (UniqueName: \"kubernetes.io/projected/c3637044-6420-4219-967b-128dd2dcdfcd-kube-api-access-2k56p\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504027 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-serving-cert\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504050 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9330ad80-6fc1-4c95-836e-7a077d18aeb9-tmp-dir\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504080 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504103 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-config\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504142 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504166 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3637044-6420-4219-967b-128dd2dcdfcd-serving-cert\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504183 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-audit\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504203 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwxv\" (UniqueName: \"kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504238 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzhw\" (UniqueName: \"kubernetes.io/projected/5ef4eb78-30f8-4a10-b956-a3ba6e587d53-kube-api-access-ttzhw\") pod \"downloads-747b44746d-bxkfp\" (UID: \"5ef4eb78-30f8-4a10-b956-a3ba6e587d53\") " pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504257 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504274 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: 
\"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504299 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504325 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2ph\" (UniqueName: \"kubernetes.io/projected/f9a06cf3-6092-4304-8ce9-f26d5b97e496-kube-api-access-tx2ph\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504346 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504364 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-audit-dir\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504387 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z656s\" (UniqueName: \"kubernetes.io/projected/9330ad80-6fc1-4c95-836e-7a077d18aeb9-kube-api-access-z656s\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504404 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504425 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504445 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hp8hj\" (UniqueName: \"kubernetes.io/projected/974ef9b5-cdf4-470e-8df3-f132304df404-kube-api-access-hp8hj\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.504465 5004 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-serving-cert\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.505652 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.506266 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-config\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.506304 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-trusted-ca\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.507878 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-config\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.508329 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.530688 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.531087 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.531208 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.531270 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.533536 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3637044-6420-4219-967b-128dd2dcdfcd-trusted-ca\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 
18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.534022 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-config\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.536719 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.537016 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.537167 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.541087 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.546057 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.548750 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.552499 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.553061 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc8857ac-4a60-413b-beab-3bc1e52a9420-tmp-dir\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.553739 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.555194 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9330ad80-6fc1-4c95-836e-7a077d18aeb9-metrics-tls\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.555296 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.555841 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-audit\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.555933 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.556063 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.556138 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-audit-dir\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.556141 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.556461 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.557226 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a06cf3-6092-4304-8ce9-f26d5b97e496-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.557763 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.557846 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a06cf3-6092-4304-8ce9-f26d5b97e496-serving-cert\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.558244 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 
18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.558521 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.559420 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9330ad80-6fc1-4c95-836e-7a077d18aeb9-tmp-dir\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.559490 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/974ef9b5-cdf4-470e-8df3-f132304df404-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.560573 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.561678 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3637044-6420-4219-967b-128dd2dcdfcd-serving-cert\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.561773 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-image-import-ca\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.567749 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.570471 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-encryption-config\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.570518 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.570636 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974ef9b5-cdf4-470e-8df3-f132304df404-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.572255 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.572595 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-etcd-client\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.572971 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974ef9b5-cdf4-470e-8df3-f132304df404-serving-cert\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.573301 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.578398 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.579110 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.579617 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.581864 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login\") pod 
\"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.595281 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.596030 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.596066 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.596215 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.596493 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.610355 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.610966 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.613504 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.615300 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.617006 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.620979 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.625892 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.628835 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.629855 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.632113 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.646473 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.646669 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.652678 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.652719 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.653308 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.654786 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.667213 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-serving-cert\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.669100 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.670568 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.670603 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.674634 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-zvml8"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.674670 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.674763 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.674860 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.680615 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc8857ac-4a60-413b-beab-3bc1e52a9420-etcd-client\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.681835 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r4pkx"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.681905 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-n96v4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.688644 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.689061 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.689321 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.689625 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694060 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8857ac-4a60-413b-beab-3bc1e52a9420-config\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694449 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nx2nz"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694511 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694533 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-xlcnv"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694548 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wqg6t"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694568 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-bxkfp"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694585 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.694606 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h287q"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.695047 5004 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.700495 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8cfds"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.700911 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.705752 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.705872 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.705927 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.705956 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.705988 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.706058 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.706085 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4l7n9"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.706097 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.706109 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-njvn7"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.710178 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkxw8"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.710531 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712903 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712926 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712975 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712986 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tk26l"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712909 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.712945 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.713125 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.714573 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.719103 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.719139 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.719177 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.719192 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.719344 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723892 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h287q"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723927 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723937 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723947 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723955 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-n96v4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723964 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-t7lx4"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723972 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8cfds"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723981 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.723990 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.725710 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7"] Dec 08 18:52:54 
crc kubenswrapper[5004]: I1208 18:52:54.727025 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.729624 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.740250 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.747951 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tk26l"] Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.749181 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.769301 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.793226 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.816235 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.845866 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr88\" (UniqueName: \"kubernetes.io/projected/c90aec7c-545e-4901-836e-96f7dbc5fac5-kube-api-access-bvr88\") pod \"openshift-controller-manager-operator-686468bdd5-4qzx9\" (UID: \"c90aec7c-545e-4901-836e-96f7dbc5fac5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.865476 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fldkd\" (UniqueName: \"kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd\") pod \"controller-manager-65b6cccf98-mf2f2\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.874931 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.884704 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvncc\" (UniqueName: \"kubernetes.io/projected/01577483-8802-409f-8495-9a15d7a1b855-kube-api-access-dvncc\") pod \"cluster-samples-operator-6b564684c8-wdrrr\" (UID: \"01577483-8802-409f-8495-9a15d7a1b855\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.889364 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.905820 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.909273 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.920039 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.928467 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.949063 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.969927 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 18:52:54 crc kubenswrapper[5004]: I1208 18:52:54.988640 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.008469 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.040517 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.054036 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.070577 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.092988 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.114133 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.132255 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.148464 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.174688 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.189133 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.208954 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9"] Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.211581 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.213272 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr"] Dec 08 18:52:55 crc kubenswrapper[5004]: W1208 18:52:55.217044 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90aec7c_545e_4901_836e_96f7dbc5fac5.slice/crio-797586c3fc194e9bea5b2636588c3a78fdfbb620838bb5eeb4d43e55accc39dc WatchSource:0}: Error finding container 797586c3fc194e9bea5b2636588c3a78fdfbb620838bb5eeb4d43e55accc39dc: Status 404 returned error can't find the container with id 797586c3fc194e9bea5b2636588c3a78fdfbb620838bb5eeb4d43e55accc39dc Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.229652 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.235315 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.240595 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" event={"ID":"c90aec7c-545e-4901-836e-96f7dbc5fac5","Type":"ContainerStarted","Data":"797586c3fc194e9bea5b2636588c3a78fdfbb620838bb5eeb4d43e55accc39dc"} Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.244720 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.247033 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62"} Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.247903 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.250763 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: W1208 18:52:55.250844 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6455354b_74ef_4e73_9a43_c7fad7edcf61.slice/crio-905720e7b3fd4e442907ffa113f9e2ca42d46722156c58cda6a35b14d38cac15 WatchSource:0}: Error finding container 905720e7b3fd4e442907ffa113f9e2ca42d46722156c58cda6a35b14d38cac15: Status 404 returned error can't find the container with id 
905720e7b3fd4e442907ffa113f9e2ca42d46722156c58cda6a35b14d38cac15 Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.288906 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.311751 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.329855 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.349442 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.368686 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.408250 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7czs7\" (UniqueName: \"kubernetes.io/projected/cc8857ac-4a60-413b-beab-3bc1e52a9420-kube-api-access-7czs7\") pod \"etcd-operator-69b85846b6-hn6pr\" (UID: \"cc8857ac-4a60-413b-beab-3bc1e52a9420\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.452655 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwxv\" (UniqueName: \"kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv\") pod \"oauth-openshift-66458b6674-r4pkx\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.471868 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2ph\" (UniqueName: \"kubernetes.io/projected/f9a06cf3-6092-4304-8ce9-f26d5b97e496-kube-api-access-tx2ph\") pod \"authentication-operator-7f5c659b84-jbshq\" (UID: \"f9a06cf3-6092-4304-8ce9-f26d5b97e496\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.483234 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp8hj\" (UniqueName: \"kubernetes.io/projected/974ef9b5-cdf4-470e-8df3-f132304df404-kube-api-access-hp8hj\") pod \"apiserver-9ddfb9f55-nx2nz\" (UID: \"974ef9b5-cdf4-470e-8df3-f132304df404\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.490681 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z656s\" (UniqueName: \"kubernetes.io/projected/9330ad80-6fc1-4c95-836e-7a077d18aeb9-kube-api-access-z656s\") pod \"dns-operator-799b87ffcd-7q525\" (UID: \"9330ad80-6fc1-4c95-836e-7a077d18aeb9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.510599 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzhw\" (UniqueName: \"kubernetes.io/projected/5ef4eb78-30f8-4a10-b956-a3ba6e587d53-kube-api-access-ttzhw\") pod 
\"downloads-747b44746d-bxkfp\" (UID: \"5ef4eb78-30f8-4a10-b956-a3ba6e587d53\") " pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.529286 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.532132 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.534926 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k56p\" (UniqueName: \"kubernetes.io/projected/c3637044-6420-4219-967b-128dd2dcdfcd-kube-api-access-2k56p\") pod \"console-operator-67c89758df-xlcnv\" (UID: \"c3637044-6420-4219-967b-128dd2dcdfcd\") " pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.549147 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.568638 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.570393 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.576490 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.587284 5004 request.go:752] "Waited before sending request" delay="1.015486248s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.590416 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.590933 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.600409 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.614293 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.634747 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.653944 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.681150 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.689860 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.710110 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.715382 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.716258 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.716925 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.746202 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.759364 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.769411 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.788690 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.821003 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.832685 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.850437 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.868922 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.888735 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.912375 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.930453 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.956543 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.968435 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 18:52:55 crc kubenswrapper[5004]: I1208 18:52:55.990633 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.009822 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.034561 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 
18:52:56.051865 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.063966 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r4pkx"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.077284 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.095362 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.109633 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.132534 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.153496 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.160412 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-bxkfp"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.171512 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.175667 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.189273 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.219992 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.232645 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.261408 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.264196 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7q525"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.285638 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.291755 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.313367 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" 
event={"ID":"9296f49b-35cb-4c66-afc5-a62a45480f3a","Type":"ContainerStarted","Data":"cca8740312b05bd64958afbbc7849ce6e9c7be1e397fa5f69d2ceb669ebc41cc"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.325619 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.332780 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.339684 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" event={"ID":"6455354b-74ef-4e73-9a43-c7fad7edcf61","Type":"ContainerStarted","Data":"aaff36e0e11f2f014fd8a27464cb291bacd06401428bbf342241c2888e62b219"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.339796 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" event={"ID":"6455354b-74ef-4e73-9a43-c7fad7edcf61","Type":"ContainerStarted","Data":"905720e7b3fd4e442907ffa113f9e2ca42d46722156c58cda6a35b14d38cac15"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.347823 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bxkfp" event={"ID":"5ef4eb78-30f8-4a10-b956-a3ba6e587d53","Type":"ContainerStarted","Data":"3b113b52990af7d0a40594e75eed2befba57dac1db007920d78cb5308b6e54f2"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.353156 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.355595 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-xlcnv"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.359495 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.373804 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.376888 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" event={"ID":"01577483-8802-409f-8495-9a15d7a1b855","Type":"ContainerStarted","Data":"c75fed00eea50e4cca526e842540834524d2e3fdd5ce877473fd37491e209d55"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.376931 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" event={"ID":"01577483-8802-409f-8495-9a15d7a1b855","Type":"ContainerStarted","Data":"9dd42e5aa79d00a73f1e582aa2db1f243c0f954051818164ef8ae0dcefbaf0d1"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.376941 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" event={"ID":"01577483-8802-409f-8495-9a15d7a1b855","Type":"ContainerStarted","Data":"40113d079434f2f6d067e24abfa63d8aceec482b52ca2354de29653eae3c8fbf"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.388214 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.393112 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" event={"ID":"c90aec7c-545e-4901-836e-96f7dbc5fac5","Type":"ContainerStarted","Data":"234245faaff1392fa0d2ba671f101bae0bb0025acc9805049d54cb718ceb87fb"} Dec 08 18:52:56 crc kubenswrapper[5004]: W1208 18:52:56.393704 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3637044_6420_4219_967b_128dd2dcdfcd.slice/crio-58b39ed55f149473834220928e82f9760871da3c3dcec83f6a9e0be2f958fc8b WatchSource:0}: Error finding container 58b39ed55f149473834220928e82f9760871da3c3dcec83f6a9e0be2f958fc8b: Status 404 returned error can't find the container with id 58b39ed55f149473834220928e82f9760871da3c3dcec83f6a9e0be2f958fc8b Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.396481 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" event={"ID":"f9a06cf3-6092-4304-8ce9-f26d5b97e496","Type":"ContainerStarted","Data":"8720e612829fbf1161a29db5b56404902143909ab0adb948f56ea4b7a3e63a34"} Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.432517 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.450370 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.460906 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nx2nz"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.473433 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492169 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-policies\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492228 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-encryption-config\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492266 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492326 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492361 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492388 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492413 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-auth-proxy-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492438 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-client\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492471 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492499 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlqw\" (UniqueName: \"kubernetes.io/projected/5d3eaa17-c643-4536-88a0-a76854e545ab-kube-api-access-nrlqw\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492523 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-serving-cert\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492552 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-oauth-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492573 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vp7l\" (UniqueName: \"kubernetes.io/projected/39fd2fcf-66db-41da-bf3b-30d991d74c76-kube-api-access-5vp7l\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492643 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492672 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-metrics-certs\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492699 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-trusted-ca-bundle\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492793 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492819 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-config\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492841 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-stats-auth\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492862 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-service-ca\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " 
pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492890 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhjj\" (UniqueName: \"kubernetes.io/projected/eab26793-a1ea-412a-8bb6-592aeabd824e-kube-api-access-zbhjj\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492917 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492940 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492962 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg7p\" (UniqueName: \"kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.492990 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493015 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d3eaa17-c643-4536-88a0-a76854e545ab-available-featuregates\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493040 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-serving-ca\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493063 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-dir\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493109 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-images\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493133 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58pb4\" (UniqueName: \"kubernetes.io/projected/295410e0-8c26-494c-89b5-fee76ecf0ff4-kube-api-access-58pb4\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493158 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnzx\" (UniqueName: \"kubernetes.io/projected/1922ff11-ecff-4b61-841e-f6b9decee4fd-kube-api-access-rdnzx\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493184 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6j5\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493213 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1922ff11-ecff-4b61-841e-f6b9decee4fd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493236 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-oauth-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493334 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vgtp\" (UniqueName: \"kubernetes.io/projected/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-kube-api-access-9vgtp\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493419 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-default-certificate\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc 
kubenswrapper[5004]: I1208 18:52:56.493470 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493520 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493540 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493574 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3eaa17-c643-4536-88a0-a76854e545ab-serving-cert\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493608 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493626 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493678 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eab26793-a1ea-412a-8bb6-592aeabd824e-machine-approver-tls\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.493699 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/295410e0-8c26-494c-89b5-fee76ecf0ff4-service-ca-bundle\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.493811 5004 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:56.993789768 +0000 UTC m=+110.642698076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.505650 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.507050 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr"] Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.509513 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: W1208 18:52:56.526891 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod974ef9b5_cdf4_470e_8df3_f132304df404.slice/crio-49af7113f919ea866efd2bd0c53a2f74fa5f4c6c3bbe5fd890a05b1fb4021dd3 WatchSource:0}: Error finding container 49af7113f919ea866efd2bd0c53a2f74fa5f4c6c3bbe5fd890a05b1fb4021dd3: Status 404 returned error can't find the container with id 49af7113f919ea866efd2bd0c53a2f74fa5f4c6c3bbe5fd890a05b1fb4021dd3 Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.536630 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.554215 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 18:52:56 crc kubenswrapper[5004]: W1208 18:52:56.561822 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8857ac_4a60_413b_beab_3bc1e52a9420.slice/crio-6a81dc77ad7598b4d60f8d3989034d688730657a989ed13d197ba8a66d632bf8 WatchSource:0}: Error finding container 6a81dc77ad7598b4d60f8d3989034d688730657a989ed13d197ba8a66d632bf8: Status 404 returned error can't find the container with id 6a81dc77ad7598b4d60f8d3989034d688730657a989ed13d197ba8a66d632bf8 Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.570805 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.589441 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.596901 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" 
(UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597289 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762d046e-d753-4f82-afa3-90572628de64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597323 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4540c2c-5c03-438a-ae32-89509db54eeb-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597355 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-policies\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597406 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eab26793-a1ea-412a-8bb6-592aeabd824e-machine-approver-tls\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597423 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zwb\" (UniqueName: \"kubernetes.io/projected/762d046e-d753-4f82-afa3-90572628de64-kube-api-access-29zwb\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597449 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/295410e0-8c26-494c-89b5-fee76ecf0ff4-service-ca-bundle\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597471 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762d046e-d753-4f82-afa3-90572628de64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597498 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkch\" (UniqueName: \"kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597520 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-node-bootstrap-token\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597538 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597557 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e59f94c1-696f-4a7d-9178-199ddda2363c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597576 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4540c2c-5c03-438a-ae32-89509db54eeb-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597592 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43fb53d-bb94-4fff-88db-a8cd4066d647-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597616 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597641 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-apiservice-cert\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597658 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597675 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8b85e4c-d122-457d-b192-1b58a5de2630-config\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597710 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/824dc6e4-c633-4036-b85f-ed97e63ec00e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597736 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597757 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d42d553c-cafa-471c-8df7-395b8463615d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597773 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-client\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597791 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-mountpoint-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597810 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b85e4c-d122-457d-b192-1b58a5de2630-serving-cert\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597838 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597854 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597871 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597896 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vp7l\" (UniqueName: \"kubernetes.io/projected/39fd2fcf-66db-41da-bf3b-30d991d74c76-kube-api-access-5vp7l\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597918 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-tmp-dir\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597948 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-metrics-certs\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597968 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-images\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.597985 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598030 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58pb4\" (UniqueName: \"kubernetes.io/projected/295410e0-8c26-494c-89b5-fee76ecf0ff4-kube-api-access-58pb4\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 
18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598107 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598129 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vgtp\" (UniqueName: \"kubernetes.io/projected/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-kube-api-access-9vgtp\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598146 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-registration-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598162 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598179 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4540c2c-5c03-438a-ae32-89509db54eeb-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598210 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d3eaa17-c643-4536-88a0-a76854e545ab-available-featuregates\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598229 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8b85e4c-d122-457d-b192-1b58a5de2630-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598251 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z778r\" (UniqueName: \"kubernetes.io/projected/765bbaba-9e29-4816-95f6-d2bc1a6fad23-kube-api-access-z778r\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 
18:52:56.598273 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43fb53d-bb94-4fff-88db-a8cd4066d647-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598294 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598312 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598332 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-stats-auth\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598351 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-config\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598368 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrdp\" (UniqueName: \"kubernetes.io/projected/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-kube-api-access-wlrdp\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598402 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx288\" (UniqueName: \"kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598418 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-webhook-cert\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598437 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdnzx\" (UniqueName: \"kubernetes.io/projected/1922ff11-ecff-4b61-841e-f6b9decee4fd-kube-api-access-rdnzx\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598456 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598481 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ws6j5\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598505 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1922ff11-ecff-4b61-841e-f6b9decee4fd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598526 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-oauth-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598549 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-default-certificate\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598570 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q82j\" (UniqueName: \"kubernetes.io/projected/a795185a-7be1-4ab8-ba7e-63a53ecc6225-kube-api-access-2q82j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598587 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-metrics-tls\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598630 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598648 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598669 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598688 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4d5b812c-79db-4f9c-9102-a2c785563717-tmpfs\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598706 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598724 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfvn\" (UniqueName: \"kubernetes.io/projected/b79abb7d-698b-41ba-95bf-59d9e718726a-kube-api-access-thfvn\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598766 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1612b92e-7bbe-499e-8162-32d2de1e36ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.598790 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35b855ab-7531-48c9-8924-9a291c0ae509-signing-cabundle\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.619251 5004 request.go:752] "Waited before sending request" delay="1.899393031s" reason="client-side throttling, not priority and fairness" verb="GET" 
URL="https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-7dcws&limit=500&resourceVersion=0" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.631759 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.635575 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-policies\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637269 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637354 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637401 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-config-volume\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637455 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-encryption-config\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637508 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhn6\" (UniqueName: \"kubernetes.io/projected/00acb591-ba56-4806-b41d-2efe11b0637d-kube-api-access-zzhn6\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637570 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637589 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbqx\" (UniqueName: \"kubernetes.io/projected/4d5b812c-79db-4f9c-9102-a2c785563717-kube-api-access-vsbqx\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637641 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-srv-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637688 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637720 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637752 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-auth-proxy-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637792 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7qs\" (UniqueName: \"kubernetes.io/projected/824dc6e4-c633-4036-b85f-ed97e63ec00e-kube-api-access-bg7qs\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637825 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9j7\" (UniqueName: \"kubernetes.io/projected/bae61685-0786-4beb-9e73-fb50660d59a6-kube-api-access-qh9j7\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637870 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a795185a-7be1-4ab8-ba7e-63a53ecc6225-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637913 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-csi-data-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc 
kubenswrapper[5004]: I1208 18:52:56.637947 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pgn9\" (UniqueName: \"kubernetes.io/projected/d42d553c-cafa-471c-8df7-395b8463615d-kube-api-access-6pgn9\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.637976 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7v84\" (UniqueName: \"kubernetes.io/projected/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-kube-api-access-l7v84\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638028 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638065 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlqw\" (UniqueName: \"kubernetes.io/projected/5d3eaa17-c643-4536-88a0-a76854e545ab-kube-api-access-nrlqw\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638122 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-serving-cert\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638184 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmht\" (UniqueName: \"kubernetes.io/projected/1612b92e-7bbe-499e-8162-32d2de1e36ab-kube-api-access-9hmht\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638217 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35b855ab-7531-48c9-8924-9a291c0ae509-signing-key\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638253 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/765bbaba-9e29-4816-95f6-d2bc1a6fad23-tmpfs\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638314 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1612b92e-7bbe-499e-8162-32d2de1e36ab-config\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638342 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-serving-cert\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638370 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-srv-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638434 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638473 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-oauth-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638524 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638555 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-trusted-ca-bundle\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.638585 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00acb591-ba56-4806-b41d-2efe11b0637d-webhook-certs\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639000 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bae61685-0786-4beb-9e73-fb50660d59a6-cert\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639034 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-certs\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639065 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-socket-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639212 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-service-ca\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639243 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbec4b8-c62a-4f2a-8836-dc5571403963-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639427 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-config\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639464 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639507 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhjj\" (UniqueName: \"kubernetes.io/projected/eab26793-a1ea-412a-8bb6-592aeabd824e-kube-api-access-zbhjj\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639539 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6fhm\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-kube-api-access-n6fhm\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639571 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8b85e4c-d122-457d-b192-1b58a5de2630-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639611 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbec4b8-c62a-4f2a-8836-dc5571403963-config\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639689 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg7p\" (UniqueName: \"kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639722 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2mx8\" (UniqueName: \"kubernetes.io/projected/35b855ab-7531-48c9-8924-9a291c0ae509-kube-api-access-b2mx8\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639753 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-tmp\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639808 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639843 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-serving-ca\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639878 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-dir\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 
18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639933 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-images\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.639963 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlf7v\" (UniqueName: \"kubernetes.io/projected/bcbec4b8-c62a-4f2a-8836-dc5571403963-kube-api-access-qlf7v\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640060 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640243 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-plugins-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640285 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrhk\" (UniqueName: \"kubernetes.io/projected/721f448b-095b-4d7f-a367-512851e5c6d6-kube-api-access-jtrhk\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640348 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zvq\" (UniqueName: \"kubernetes.io/projected/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-kube-api-access-r5zvq\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640423 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640453 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640493 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4540c2c-5c03-438a-ae32-89509db54eeb-config\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640537 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640576 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x895n\" (UniqueName: \"kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640619 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2m7\" (UniqueName: \"kubernetes.io/projected/29320623-cb93-488c-8bbf-4ac828a43a75-kube-api-access-gq2m7\") pod \"migrator-866fcbc849-jcs6x\" (UID: \"29320623-cb93-488c-8bbf-4ac828a43a75\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640656 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5248m\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-kube-api-access-5248m\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640685 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640742 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3eaa17-c643-4536-88a0-a76854e545ab-serving-cert\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.640796 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e59f94c1-696f-4a7d-9178-199ddda2363c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.658313 5004 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.660302 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.160256243 +0000 UTC m=+110.809164551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.663698 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-trusted-ca-bundle\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.666488 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.666650 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-serving-ca\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.670720 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-auth-proxy-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.671775 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.671868 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 
18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.672844 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fd2fcf-66db-41da-bf3b-30d991d74c76-audit-dir\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.673742 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-images\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.674843 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b79abb7d-698b-41ba-95bf-59d9e718726a-tmpfs\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.674907 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.676057 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab26793-a1ea-412a-8bb6-592aeabd824e-config\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.680483 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d3eaa17-c643-4536-88a0-a76854e545ab-available-featuregates\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.680905 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.681801 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1922ff11-ecff-4b61-841e-f6b9decee4fd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.688108 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted\") pod \"image-registry-66587d64c8-pxbdc\" 
(UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.688984 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.689385 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-service-ca\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.689443 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fd2fcf-66db-41da-bf3b-30d991d74c76-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.690127 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-oauth-serving-cert\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.699526 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eab26793-a1ea-412a-8bb6-592aeabd824e-machine-approver-tls\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.700113 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-default-certificate\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.700792 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.701421 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-console-oauth-config\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.701884 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 18:52:56 crc 
kubenswrapper[5004]: I1208 18:52:56.702879 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1922ff11-ecff-4b61-841e-f6b9decee4fd-config\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.703346 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.705984 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/295410e0-8c26-494c-89b5-fee76ecf0ff4-service-ca-bundle\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.709580 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.713586 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3eaa17-c643-4536-88a0-a76854e545ab-serving-cert\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.721371 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.725051 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-stats-auth\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.727292 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.727929 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-serving-cert\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.729534 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-encryption-config\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.733012 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fd2fcf-66db-41da-bf3b-30d991d74c76-etcd-client\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.740875 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/295410e0-8c26-494c-89b5-fee76ecf0ff4-metrics-certs\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.745146 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vgtp\" (UniqueName: \"kubernetes.io/projected/b2c5e9e8-9b38-40fe-89fa-34d128ee718c-kube-api-access-9vgtp\") pod \"console-64d44f6ddf-t7lx4\" (UID: \"b2c5e9e8-9b38-40fe-89fa-34d128ee718c\") " pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.751244 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhjj\" (UniqueName: \"kubernetes.io/projected/eab26793-a1ea-412a-8bb6-592aeabd824e-kube-api-access-zbhjj\") pod \"machine-approver-54c688565-mjhc2\" (UID: \"eab26793-a1ea-412a-8bb6-592aeabd824e\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.759191 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778350 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlf7v\" (UniqueName: \"kubernetes.io/projected/bcbec4b8-c62a-4f2a-8836-dc5571403963-kube-api-access-qlf7v\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778462 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778487 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-plugins-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778506 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrhk\" (UniqueName: \"kubernetes.io/projected/721f448b-095b-4d7f-a367-512851e5c6d6-kube-api-access-jtrhk\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778530 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zvq\" (UniqueName: \"kubernetes.io/projected/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-kube-api-access-r5zvq\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778556 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778578 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778601 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4540c2c-5c03-438a-ae32-89509db54eeb-config\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778628 5004 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-x895n\" (UniqueName: \"kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778652 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2m7\" (UniqueName: \"kubernetes.io/projected/29320623-cb93-488c-8bbf-4ac828a43a75-kube-api-access-gq2m7\") pod \"migrator-866fcbc849-jcs6x\" (UID: \"29320623-cb93-488c-8bbf-4ac828a43a75\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778676 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5248m\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-kube-api-access-5248m\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778697 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778724 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e59f94c1-696f-4a7d-9178-199ddda2363c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778746 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b79abb7d-698b-41ba-95bf-59d9e718726a-tmpfs\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778770 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762d046e-d753-4f82-afa3-90572628de64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778821 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4540c2c-5c03-438a-ae32-89509db54eeb-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778857 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29zwb\" (UniqueName: \"kubernetes.io/projected/762d046e-d753-4f82-afa3-90572628de64-kube-api-access-29zwb\") pod 
\"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778886 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762d046e-d753-4f82-afa3-90572628de64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778912 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkch\" (UniqueName: \"kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778935 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-node-bootstrap-token\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778957 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e59f94c1-696f-4a7d-9178-199ddda2363c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.778983 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4540c2c-5c03-438a-ae32-89509db54eeb-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779009 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43fb53d-bb94-4fff-88db-a8cd4066d647-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779035 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-apiservice-cert\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779055 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: 
\"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779112 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8b85e4c-d122-457d-b192-1b58a5de2630-config\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779143 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/824dc6e4-c633-4036-b85f-ed97e63ec00e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779200 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779224 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d42d553c-cafa-471c-8df7-395b8463615d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779250 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-mountpoint-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779273 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b85e4c-d122-457d-b192-1b58a5de2630-serving-cert\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779303 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779328 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779351 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779382 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-tmp-dir\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779407 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-images\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779431 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779489 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-registration-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779513 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779545 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4540c2c-5c03-438a-ae32-89509db54eeb-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779576 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8b85e4c-d122-457d-b192-1b58a5de2630-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779601 5004 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z778r\" (UniqueName: \"kubernetes.io/projected/765bbaba-9e29-4816-95f6-d2bc1a6fad23-kube-api-access-z778r\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779635 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43fb53d-bb94-4fff-88db-a8cd4066d647-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779663 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779684 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779723 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779748 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-config\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779769 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrdp\" (UniqueName: \"kubernetes.io/projected/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-kube-api-access-wlrdp\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779799 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kx288\" (UniqueName: \"kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779820 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-webhook-cert\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779842 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779872 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2q82j\" (UniqueName: \"kubernetes.io/projected/a795185a-7be1-4ab8-ba7e-63a53ecc6225-kube-api-access-2q82j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779892 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-metrics-tls\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779922 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779953 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4d5b812c-79db-4f9c-9102-a2c785563717-tmpfs\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.779975 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thfvn\" (UniqueName: \"kubernetes.io/projected/b79abb7d-698b-41ba-95bf-59d9e718726a-kube-api-access-thfvn\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780027 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1612b92e-7bbe-499e-8162-32d2de1e36ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780053 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35b855ab-7531-48c9-8924-9a291c0ae509-signing-cabundle\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " 
pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780090 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780115 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780155 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-config-volume\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780204 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhn6\" (UniqueName: \"kubernetes.io/projected/00acb591-ba56-4806-b41d-2efe11b0637d-kube-api-access-zzhn6\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780249 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbqx\" (UniqueName: \"kubernetes.io/projected/4d5b812c-79db-4f9c-9102-a2c785563717-kube-api-access-vsbqx\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780271 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-srv-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780328 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7qs\" (UniqueName: \"kubernetes.io/projected/824dc6e4-c633-4036-b85f-ed97e63ec00e-kube-api-access-bg7qs\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780350 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9j7\" (UniqueName: \"kubernetes.io/projected/bae61685-0786-4beb-9e73-fb50660d59a6-kube-api-access-qh9j7\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780378 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a795185a-7be1-4ab8-ba7e-63a53ecc6225-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780408 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-csi-data-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780429 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6pgn9\" (UniqueName: \"kubernetes.io/projected/d42d553c-cafa-471c-8df7-395b8463615d-kube-api-access-6pgn9\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780451 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7v84\" (UniqueName: \"kubernetes.io/projected/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-kube-api-access-l7v84\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780508 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmht\" (UniqueName: \"kubernetes.io/projected/1612b92e-7bbe-499e-8162-32d2de1e36ab-kube-api-access-9hmht\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.780530 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35b855ab-7531-48c9-8924-9a291c0ae509-signing-key\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785007 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/765bbaba-9e29-4816-95f6-d2bc1a6fad23-tmpfs\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785097 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1612b92e-7bbe-499e-8162-32d2de1e36ab-config\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785130 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-serving-cert\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785155 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-srv-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785190 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785257 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00acb591-ba56-4806-b41d-2efe11b0637d-webhook-certs\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785294 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bae61685-0786-4beb-9e73-fb50660d59a6-cert\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785319 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-certs\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785449 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-socket-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785595 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbec4b8-c62a-4f2a-8836-dc5571403963-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785874 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6fhm\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-kube-api-access-n6fhm\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785923 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8b85e4c-d122-457d-b192-1b58a5de2630-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.785960 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbec4b8-c62a-4f2a-8836-dc5571403963-config\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.786002 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2mx8\" (UniqueName: \"kubernetes.io/projected/35b855ab-7531-48c9-8924-9a291c0ae509-kube-api-access-b2mx8\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.786028 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-tmp\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.788272 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.789829 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-tmp\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.789925 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg7p\" (UniqueName: \"kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p\") pod \"route-controller-manager-776cdc94d6-bmpp4\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.790795 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43fb53d-bb94-4fff-88db-a8cd4066d647-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc 
kubenswrapper[5004]: I1208 18:52:56.798845 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.799237 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.299216755 +0000 UTC m=+110.948125063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.800133 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-config\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.802324 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-mountpoint-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.802962 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.803986 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.804563 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4540c2c-5c03-438a-ae32-89509db54eeb-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.808677 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43fb53d-bb94-4fff-88db-a8cd4066d647-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: 
\"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.809285 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b79abb7d-698b-41ba-95bf-59d9e718726a-tmpfs\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.817053 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-apiservice-cert\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.818054 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbec4b8-c62a-4f2a-8836-dc5571403963-config\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.818448 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8b85e4c-d122-457d-b192-1b58a5de2630-config\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.818908 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e59f94c1-696f-4a7d-9178-199ddda2363c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.819560 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-registration-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.823617 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.825758 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/762d046e-d753-4f82-afa3-90572628de64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.826417 5004 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/762d046e-d753-4f82-afa3-90572628de64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.830431 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-metrics-tls\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.830848 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4d5b812c-79db-4f9c-9102-a2c785563717-tmpfs\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.831114 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8b85e4c-d122-457d-b192-1b58a5de2630-tmp-dir\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.831935 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.834674 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnzx\" (UniqueName: \"kubernetes.io/projected/1922ff11-ecff-4b61-841e-f6b9decee4fd-kube-api-access-rdnzx\") pod \"machine-api-operator-755bb95488-zvml8\" (UID: \"1922ff11-ecff-4b61-841e-f6b9decee4fd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.835605 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-tmp-dir\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.840342 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/824dc6e4-c633-4036-b85f-ed97e63ec00e-images\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.841826 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-node-bootstrap-token\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 
18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.842157 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/e43fb53d-bb94-4fff-88db-a8cd4066d647-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.842232 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4540c2c-5c03-438a-ae32-89509db54eeb-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.845397 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.851504 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-csi-data-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.852342 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/765bbaba-9e29-4816-95f6-d2bc1a6fad23-tmpfs\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.852612 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.852772 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-socket-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.853300 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.860638 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4d5b812c-79db-4f9c-9102-a2c785563717-webhook-cert\") pod \"packageserver-7d4fc7d867-chbws\" 
(UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.860919 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1612b92e-7bbe-499e-8162-32d2de1e36ab-config\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.861232 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-config-volume\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.861410 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35b855ab-7531-48c9-8924-9a291c0ae509-signing-cabundle\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.861973 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.862141 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/721f448b-095b-4d7f-a367-512851e5c6d6-plugins-dir\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.862476 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws6j5\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.864926 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4540c2c-5c03-438a-ae32-89509db54eeb-config\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.865737 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbec4b8-c62a-4f2a-8836-dc5571403963-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.868679 5004 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.876933 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a795185a-7be1-4ab8-ba7e-63a53ecc6225-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.877773 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d42d553c-cafa-471c-8df7-395b8463615d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.879293 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-srv-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.881742 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35b855ab-7531-48c9-8924-9a291c0ae509-signing-key\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.884193 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.891181 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.891568 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.391503478 +0000 UTC m=+111.040411786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892180 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892230 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-certs\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892233 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892335 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892640 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-serving-cert\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892655 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b85e4c-d122-457d-b192-1b58a5de2630-serving-cert\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.892770 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-srv-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.892838 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.392826221 +0000 UTC m=+111.041734719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.893248 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/824dc6e4-c633-4036-b85f-ed97e63ec00e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.893950 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1612b92e-7bbe-499e-8162-32d2de1e36ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.909651 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00acb591-ba56-4806-b41d-2efe11b0637d-webhook-certs\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.913257 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bae61685-0786-4beb-9e73-fb50660d59a6-cert\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.913745 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/765bbaba-9e29-4816-95f6-d2bc1a6fad23-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.917417 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e59f94c1-696f-4a7d-9178-199ddda2363c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.920016 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58pb4\" (UniqueName: \"kubernetes.io/projected/295410e0-8c26-494c-89b5-fee76ecf0ff4-kube-api-access-58pb4\") pod \"router-default-68cf44c8b8-h7zw2\" (UID: \"295410e0-8c26-494c-89b5-fee76ecf0ff4\") " pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.920209 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b79abb7d-698b-41ba-95bf-59d9e718726a-profile-collector-cert\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.928550 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vp7l\" (UniqueName: \"kubernetes.io/projected/39fd2fcf-66db-41da-bf3b-30d991d74c76-kube-api-access-5vp7l\") pod \"apiserver-8596bd845d-dwxjt\" (UID: \"39fd2fcf-66db-41da-bf3b-30d991d74c76\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.932700 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlqw\" (UniqueName: \"kubernetes.io/projected/5d3eaa17-c643-4536-88a0-a76854e545ab-kube-api-access-nrlqw\") pod \"openshift-config-operator-5777786469-wqg6t\" (UID: \"5d3eaa17-c643-4536-88a0-a76854e545ab\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.936248 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg7qs\" (UniqueName: \"kubernetes.io/projected/824dc6e4-c633-4036-b85f-ed97e63ec00e-kube-api-access-bg7qs\") pod \"machine-config-operator-67c9d58cbb-tgfjp\" (UID: \"824dc6e4-c633-4036-b85f-ed97e63ec00e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.957059 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrdp\" (UniqueName: \"kubernetes.io/projected/e1f41e8f-783b-443b-b8a8-4bcd32c803c2-kube-api-access-wlrdp\") pod \"service-ca-operator-5b9c976747-nfdbk\" (UID: \"e1f41e8f-783b-443b-b8a8-4bcd32c803c2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.980616 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx288\" (UniqueName: \"kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288\") pod \"marketplace-operator-547dbd544d-z7q5s\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.991128 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2m7\" (UniqueName: \"kubernetes.io/projected/29320623-cb93-488c-8bbf-4ac828a43a75-kube-api-access-gq2m7\") pod \"migrator-866fcbc849-jcs6x\" (UID: \"29320623-cb93-488c-8bbf-4ac828a43a75\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.994174 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.994486 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.494456575 +0000 UTC m=+111.143364883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:56 crc kubenswrapper[5004]: I1208 18:52:56.994898 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:56 crc kubenswrapper[5004]: E1208 18:52:56.995325 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.495316472 +0000 UTC m=+111.144224780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.014242 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5248m\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-kube-api-access-5248m\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.024685 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.031337 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.036736 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.040562 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.051329 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.054496 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2mx8\" (UniqueName: \"kubernetes.io/projected/35b855ab-7531-48c9-8924-9a291c0ae509-kube-api-access-b2mx8\") pod \"service-ca-74545575db-n96v4\" (UID: \"35b855ab-7531-48c9-8924-9a291c0ae509\") " pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.087448 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4540c2c-5c03-438a-ae32-89509db54eeb-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mchbg\" (UID: \"d4540c2c-5c03-438a-ae32-89509db54eeb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.087766 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.095839 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.096452 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.596432399 +0000 UTC m=+111.245340697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.096692 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zwb\" (UniqueName: \"kubernetes.io/projected/762d046e-d753-4f82-afa3-90572628de64-kube-api-access-29zwb\") pod \"machine-config-controller-f9cdd68f7-dns59\" (UID: \"762d046e-d753-4f82-afa3-90572628de64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.109395 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.119309 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkch\" (UniqueName: \"kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch\") pod \"cni-sysctl-allowlist-ds-pkxw8\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.129544 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.132687 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.147239 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z778r\" (UniqueName: \"kubernetes.io/projected/765bbaba-9e29-4816-95f6-d2bc1a6fad23-kube-api-access-z778r\") pod \"catalog-operator-75ff9f647d-jhvdw\" (UID: \"765bbaba-9e29-4816-95f6-d2bc1a6fad23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.156225 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.166895 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q82j\" (UniqueName: \"kubernetes.io/projected/a795185a-7be1-4ab8-ba7e-63a53ecc6225-kube-api-access-2q82j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-88whn\" (UID: \"a795185a-7be1-4ab8-ba7e-63a53ecc6225\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.173212 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfvn\" (UniqueName: \"kubernetes.io/projected/b79abb7d-698b-41ba-95bf-59d9e718726a-kube-api-access-thfvn\") pod \"olm-operator-5cdf44d969-xw8q7\" (UID: \"b79abb7d-698b-41ba-95bf-59d9e718726a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.196553 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.199187 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.199558 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.69954078 +0000 UTC m=+111.348449088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.205821 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8b85e4c-d122-457d-b192-1b58a5de2630-kube-api-access\") pod \"kube-apiserver-operator-575994946d-sq7b5\" (UID: \"e8b85e4c-d122-457d-b192-1b58a5de2630\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.209246 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.234026 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.241313 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zmngf\" (UID: \"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.243566 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-t7lx4"] Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.251111 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.252901 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6fhm\" (UniqueName: \"kubernetes.io/projected/e59f94c1-696f-4a7d-9178-199ddda2363c-kube-api-access-n6fhm\") pod \"ingress-operator-6b9cb4dbcf-c2wzq\" (UID: \"e59f94c1-696f-4a7d-9178-199ddda2363c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.261730 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.274679 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhn6\" (UniqueName: \"kubernetes.io/projected/00acb591-ba56-4806-b41d-2efe11b0637d-kube-api-access-zzhn6\") pod \"multus-admission-controller-69db94689b-4l7n9\" (UID: \"00acb591-ba56-4806-b41d-2efe11b0637d\") " pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.304705 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.305481 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.8054088 +0000 UTC m=+111.454317118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.305715 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.316511 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbqx\" (UniqueName: \"kubernetes.io/projected/4d5b812c-79db-4f9c-9102-a2c785563717-kube-api-access-vsbqx\") pod \"packageserver-7d4fc7d867-chbws\" (UID: \"4d5b812c-79db-4f9c-9102-a2c785563717\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.322317 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-n96v4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.322942 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9j7\" (UniqueName: \"kubernetes.io/projected/bae61685-0786-4beb-9e73-fb50660d59a6-kube-api-access-qh9j7\") pod \"ingress-canary-h287q\" (UID: \"bae61685-0786-4beb-9e73-fb50660d59a6\") " pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.330823 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.341816 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43fb53d-bb94-4fff-88db-a8cd4066d647-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-tnw6x\" (UID: \"e43fb53d-bb94-4fff-88db-a8cd4066d647\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.350129 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7v84\" (UniqueName: \"kubernetes.io/projected/bcfacaf1-601b-4cb6-9c0e-528f2e5d655c-kube-api-access-l7v84\") pod \"machine-config-server-njvn7\" (UID: \"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c\") " pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.361264 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pgn9\" (UniqueName: \"kubernetes.io/projected/d42d553c-cafa-471c-8df7-395b8463615d-kube-api-access-6pgn9\") pod \"package-server-manager-77f986bd66-6p5ww\" (UID: \"d42d553c-cafa-471c-8df7-395b8463615d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.361360 5004 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mf2f2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.361455 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.375514 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h287q" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.376389 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmht\" (UniqueName: \"kubernetes.io/projected/1612b92e-7bbe-499e-8162-32d2de1e36ab-kube-api-access-9hmht\") pod \"openshift-apiserver-operator-846cbfc458-p8vc4\" (UID: \"1612b92e-7bbe-499e-8162-32d2de1e36ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.396642 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrhk\" (UniqueName: \"kubernetes.io/projected/721f448b-095b-4d7f-a367-512851e5c6d6-kube-api-access-jtrhk\") pod \"csi-hostpathplugin-tk26l\" (UID: \"721f448b-095b-4d7f-a367-512851e5c6d6\") " pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.401644 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.411291 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.411946 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:57.911930191 +0000 UTC m=+111.560838499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.415156 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njvn7" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.436816 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.438260 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.448571 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zvq\" (UniqueName: \"kubernetes.io/projected/5c99fc6d-0d93-47fd-87fd-9e80ada9319c-kube-api-access-r5zvq\") pod \"dns-default-8cfds\" (UID: \"5c99fc6d-0d93-47fd-87fd-9e80ada9319c\") " pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.466655 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.470250 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlf7v\" (UniqueName: \"kubernetes.io/projected/bcbec4b8-c62a-4f2a-8836-dc5571403963-kube-api-access-qlf7v\") pod \"kube-storage-version-migrator-operator-565b79b866-xsdsz\" (UID: \"bcbec4b8-c62a-4f2a-8836-dc5571403963\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.474440 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.480871 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" event={"ID":"9296f49b-35cb-4c66-afc5-a62a45480f3a","Type":"ContainerStarted","Data":"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.483322 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.485273 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.485661 5004 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r4pkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.485713 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.488018 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" event={"ID":"cc8857ac-4a60-413b-beab-3bc1e52a9420","Type":"ContainerStarted","Data":"6a81dc77ad7598b4d60f8d3989034d688730657a989ed13d197ba8a66d632bf8"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.493174 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x895n\" (UniqueName: \"kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n\") pod \"collect-profiles-29420325-tglp4\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.495406 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bxkfp" event={"ID":"5ef4eb78-30f8-4a10-b956-a3ba6e587d53","Type":"ContainerStarted","Data":"310db58d9b79248fa3df1ac237a3c152f4d1126585f3ced2f62a67403543e248"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.496529 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.498067 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" event={"ID":"974ef9b5-cdf4-470e-8df3-f132304df404","Type":"ContainerStarted","Data":"49af7113f919ea866efd2bd0c53a2f74fa5f4c6c3bbe5fd890a05b1fb4021dd3"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.499616 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= 
Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.499663 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.513828 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.514176 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.014159473 +0000 UTC m=+111.663067781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.520552 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" event={"ID":"f9a06cf3-6092-4304-8ce9-f26d5b97e496","Type":"ContainerStarted","Data":"e3982aed0a4210f76c63fd7ca948f4ed3e0e68b8975b89dfc5245dd7fdd93b6c"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.522477 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" event={"ID":"eab26793-a1ea-412a-8bb6-592aeabd824e","Type":"ContainerStarted","Data":"a510eea883e697de114ebfc9bc5c8b062060485dd5f2f2673a177cb41497461e"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.523333 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-t7lx4" event={"ID":"b2c5e9e8-9b38-40fe-89fa-34d128ee718c","Type":"ContainerStarted","Data":"d884e0babb94cdd562f8593044c9638ec3a6cdb84d6f2e69369e880ca997e944"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.532347 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" event={"ID":"c3637044-6420-4219-967b-128dd2dcdfcd","Type":"ContainerStarted","Data":"0ce2485f8c9a9577692e277358a0aa7f657a569ab1c6d9538d4e1377a8821eef"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.532403 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" event={"ID":"c3637044-6420-4219-967b-128dd2dcdfcd","Type":"ContainerStarted","Data":"58b39ed55f149473834220928e82f9760871da3c3dcec83f6a9e0be2f958fc8b"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.536190 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 
18:52:57.536322 5004 patch_prober.go:28] interesting pod/console-operator-67c89758df-xlcnv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.536360 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" podUID="c3637044-6420-4219-967b-128dd2dcdfcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.550358 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.573742 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.586467 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.611376 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.613726 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" event={"ID":"9330ad80-6fc1-4c95-836e-7a077d18aeb9","Type":"ContainerStarted","Data":"194a7995b9302a294455b1572f9cd5617270b7685fef077471da186d726883ff"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.613858 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" event={"ID":"9330ad80-6fc1-4c95-836e-7a077d18aeb9","Type":"ContainerStarted","Data":"7317d01f07b7af803f906b762106685032f51bc74bf1a0ff3d19b243d91789ab"} Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.636532 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.640867 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.140843121 +0000 UTC m=+111.789751429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.697451 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8cfds" Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.707713 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wqg6t"] Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.754595 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.764905 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.264865613 +0000 UTC m=+111.913773921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.859260 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.859848 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.359829233 +0000 UTC m=+112.008737541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.905554 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt"] Dec 08 18:52:57 crc kubenswrapper[5004]: I1208 18:52:57.963676 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:57 crc kubenswrapper[5004]: E1208 18:52:57.964780 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.464716701 +0000 UTC m=+112.113625009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.149857 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-n96v4"] Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.150497 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.151825 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.651799118 +0000 UTC m=+112.300707426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.156515 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59"] Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.159317 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x"] Dec 08 18:52:58 crc kubenswrapper[5004]: W1208 18:52:58.217616 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1aa164d_cf7a_4c71_90db_3488e29d60a2.slice/crio-17c94a33c2a825e5ad1ffee48b6350c8c8e5aad6dc1aa4a3596bd7e4960893ab WatchSource:0}: Error finding container 17c94a33c2a825e5ad1ffee48b6350c8c8e5aad6dc1aa4a3596bd7e4960893ab: Status 404 returned error can't find the container with id 17c94a33c2a825e5ad1ffee48b6350c8c8e5aad6dc1aa4a3596bd7e4960893ab Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.257489 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.257820 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.757801411 +0000 UTC m=+112.406709719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.295793 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.359367 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.364504 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.364997 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.864973743 +0000 UTC m=+112.513882051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.470886 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.471437 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:58.971408841 +0000 UTC m=+112.620317139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.509856 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk"] Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.563278 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.56326062 podStartE2EDuration="28.56326062s" podCreationTimestamp="2025-12-08 18:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:58.5629512 +0000 UTC m=+112.211859528" watchObservedRunningTime="2025-12-08 18:52:58.56326062 +0000 UTC m=+112.212168928" Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.582093 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.582486 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.082472277 +0000 UTC m=+112.731380585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.664010 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" event={"ID":"eab26793-a1ea-412a-8bb6-592aeabd824e","Type":"ContainerStarted","Data":"32a84ab497f859a2b00fc5e37b8edd14c3c9b5ac59f3b784a1610a05a675d17a"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.671586 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" podStartSLOduration=91.671560588 podStartE2EDuration="1m31.671560588s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:58.669370747 +0000 UTC m=+112.318279055" watchObservedRunningTime="2025-12-08 18:52:58.671560588 +0000 UTC m=+112.320468896" Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.689750 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.690476 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.190449245 +0000 UTC m=+112.839357553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.758210 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" event={"ID":"9330ad80-6fc1-4c95-836e-7a077d18aeb9","Type":"ContainerStarted","Data":"20495e7f2cd8b4264c77bc41427ca0e1ea1934d03ecb9ac796345f50ee1e16c3"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.793894 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podStartSLOduration=91.793848495 podStartE2EDuration="1m31.793848495s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:58.793233755 +0000 UTC m=+112.442142083" watchObservedRunningTime="2025-12-08 18:52:58.793848495 +0000 UTC m=+112.442756803" Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.796014 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.796551 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.296521031 +0000 UTC m=+112.945429349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.810128 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" event={"ID":"29320623-cb93-488c-8bbf-4ac828a43a75","Type":"ContainerStarted","Data":"450863cdc5fe1a58de0c0a0b30ed3dc323f456685732a4887abfb60ad5fecc96"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.828017 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" event={"ID":"295410e0-8c26-494c-89b5-fee76ecf0ff4","Type":"ContainerStarted","Data":"3473e38608dfc2353ec0a89b5ed19e31e4b99f9b9e9dc032bcd266a4696a7cec"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.890549 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" event={"ID":"762d046e-d753-4f82-afa3-90572628de64","Type":"ContainerStarted","Data":"9ad10527a7f095dc0a42d752ed09efcae99b69f2d2b53bb2858295f8d40b176a"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.892270 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wdrrr" podStartSLOduration=91.892211023 podStartE2EDuration="1m31.892211023s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:58.849536203 +0000 UTC m=+112.498444511" watchObservedRunningTime="2025-12-08 18:52:58.892211023 +0000 UTC m=+112.541119331" Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.900825 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.915824 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:58 crc kubenswrapper[5004]: E1208 18:52:58.917271 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.417239247 +0000 UTC m=+113.066147555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.927732 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" event={"ID":"39fd2fcf-66db-41da-bf3b-30d991d74c76","Type":"ContainerStarted","Data":"c878b3d837fb7ae8fb17d5648d0c43c54f9943044a70d190645d2f9628227069"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.957483 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerStarted","Data":"97047b8be38ead83bc35d2b0a1e92a05e65b2c38d2514a8fe2b2ea34c665e5b3"} Dec 08 18:52:58 crc kubenswrapper[5004]: I1208 18:52:58.966445 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-bxkfp" podStartSLOduration=91.966423106 podStartE2EDuration="1m31.966423106s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:58.928691125 +0000 UTC m=+112.577599453" watchObservedRunningTime="2025-12-08 18:52:58.966423106 +0000 UTC m=+112.615331414" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.014379 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-n96v4" event={"ID":"35b855ab-7531-48c9-8924-9a291c0ae509","Type":"ContainerStarted","Data":"267d9f593a5fd70c214572a59da9419e73601eb1f98a3525abf15325763bcc3c"} Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.023492 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.024190 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.524168321 +0000 UTC m=+113.173076629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.036012 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" event={"ID":"a1aa164d-cf7a-4c71-90db-3488e29d60a2","Type":"ContainerStarted","Data":"17c94a33c2a825e5ad1ffee48b6350c8c8e5aad6dc1aa4a3596bd7e4960893ab"} Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.067638 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-jbshq" podStartSLOduration=92.067599055 podStartE2EDuration="1m32.067599055s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:59.059821325 +0000 UTC m=+112.708729633" watchObservedRunningTime="2025-12-08 18:52:59.067599055 +0000 UTC m=+112.716507363" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.068532 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h287q"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.087965 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" event={"ID":"cc8857ac-4a60-413b-beab-3bc1e52a9420","Type":"ContainerStarted","Data":"88bba1ad1c4be140a185201ec0dff1e016942babbed570b8c73dff567882fa70"} Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.095834 5004 generic.go:358] "Generic (PLEG): container finished" podID="974ef9b5-cdf4-470e-8df3-f132304df404" containerID="b003ee213b46b0ed41f697154815032e6b97acc82aa99f08565cf9c4b7b1b38d" exitCode=0 Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.096457 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" event={"ID":"974ef9b5-cdf4-470e-8df3-f132304df404","Type":"ContainerDied","Data":"b003ee213b46b0ed41f697154815032e6b97acc82aa99f08565cf9c4b7b1b38d"} Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.125793 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.126153 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.626128924 +0000 UTC m=+113.275037232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.127202 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.127929 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.627918592 +0000 UTC m=+113.276826900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: W1208 18:52:59.142360 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b8eee8_00f8_4078_a0d1_3805d336771f.slice/crio-6dcb41bff652e1428dc21b5dd6d4372d275efeb7ae815d1ee5b98184a3d2f80a WatchSource:0}: Error finding container 6dcb41bff652e1428dc21b5dd6d4372d275efeb7ae815d1ee5b98184a3d2f80a: Status 404 returned error can't find the container with id 6dcb41bff652e1428dc21b5dd6d4372d275efeb7ae815d1ee5b98184a3d2f80a Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.231800 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.232850 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.73281169 +0000 UTC m=+113.381719998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.307179 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4qzx9" podStartSLOduration=92.307150438 podStartE2EDuration="1m32.307150438s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:59.257547794 +0000 UTC m=+112.906456102" watchObservedRunningTime="2025-12-08 18:52:59.307150438 +0000 UTC m=+112.956058746" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.334537 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.335046 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.835027553 +0000 UTC m=+113.483935861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.349208 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.349320 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.439535 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.440846 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:52:59.94082228 +0000 UTC m=+113.589730588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.546352 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.548013 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.047998072 +0000 UTC m=+113.696906380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.567554 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tk26l"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.567738 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" podStartSLOduration=92.567714174 podStartE2EDuration="1m32.567714174s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:59.546678379 +0000 UTC m=+113.195586687" watchObservedRunningTime="2025-12-08 18:52:59.567714174 +0000 UTC m=+113.216622482" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.647704 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.648244 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.148213809 +0000 UTC m=+113.797122117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.700588 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.749913 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.750412 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.25039695 +0000 UTC m=+113.899305258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.851492 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.852654 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.352628083 +0000 UTC m=+114.001536391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.873449 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-7q525" podStartSLOduration=92.8734061 podStartE2EDuration="1m32.8734061s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:59.814874601 +0000 UTC m=+113.463782909" watchObservedRunningTime="2025-12-08 18:52:59.8734061 +0000 UTC m=+113.522314408" Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.912385 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.932866 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.953237 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4l7n9"] Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.954542 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:52:59 crc kubenswrapper[5004]: E1208 18:52:59.955140 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:00.455119404 +0000 UTC m=+114.104027712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:52:59 crc kubenswrapper[5004]: I1208 18:52:59.969574 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hn6pr" podStartSLOduration=92.969530377 podStartE2EDuration="1m32.969530377s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:52:59.925014848 +0000 UTC m=+113.573923166" watchObservedRunningTime="2025-12-08 18:52:59.969530377 +0000 UTC m=+113.618438685" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.059586 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.060453 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.560415585 +0000 UTC m=+114.209323893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.060936 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.061570 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.561562942 +0000 UTC m=+114.210471250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.063332 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-zvml8"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.121235 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.170758 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.171279 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.671254244 +0000 UTC m=+114.320162542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.172344 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.175607 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" event={"ID":"bdf0d9fe-459a-442c-b551-ba165104b4fd","Type":"ContainerStarted","Data":"c55b6f72cd2e6e2c5bcbb01eca3a8772c88dae7e8f7354a0ba11a0d40039b57c"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.187494 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerStarted","Data":"22576f18dacc30a5d1b1f46696bb88bf70e8b52f5308ec0121df6a7721fd50e7"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.192130 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.210180 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" event={"ID":"00acb591-ba56-4806-b41d-2efe11b0637d","Type":"ContainerStarted","Data":"a1fb74bf5c417c22e3d71e2c5b8eed5f41f8fa6c2ea753abf873e7d6a3f1e04c"} Dec 08 18:53:00 crc 
kubenswrapper[5004]: I1208 18:53:00.278347 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.278371 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" event={"ID":"4d5b812c-79db-4f9c-9102-a2c785563717","Type":"ContainerStarted","Data":"bcc7740a7bd0da83122a1a32d4770d4d4e95c201c4ddb2d43cf886106fd0298c"} Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.278797 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.778779067 +0000 UTC m=+114.427687375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.293811 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.309679 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" event={"ID":"765bbaba-9e29-4816-95f6-d2bc1a6fad23","Type":"ContainerStarted","Data":"7da3d4828dd239d9e6411722af138ab2ba0e08305a693e1e2c58adfbf24d068a"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.320344 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.344176 5004 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r4pkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.344262 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.359339 5004 patch_prober.go:28] interesting pod/console-operator-67c89758df-xlcnv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.359423 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-xlcnv" podUID="c3637044-6420-4219-967b-128dd2dcdfcd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.382541 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.382938 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.88290565 +0000 UTC m=+114.531813948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.385616 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.386166 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.886145695 +0000 UTC m=+114.535054003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.391954 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8cfds"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.480943 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" event={"ID":"e1f41e8f-783b-443b-b8a8-4bcd32c803c2","Type":"ContainerStarted","Data":"51dbdbe2021b238ba807d7b8eb591da721ffb8a6b3e871b74ee1624c84a6075b"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.483036 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-t7lx4" event={"ID":"b2c5e9e8-9b38-40fe-89fa-34d128ee718c","Type":"ContainerStarted","Data":"0b1a3a70d60291dbe7482612cdf608688cda3b55c6d0bcc9cfd504c910809855"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.483769 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h287q" event={"ID":"bae61685-0786-4beb-9e73-fb50660d59a6","Type":"ContainerStarted","Data":"5d1accf4fc6ac2aa518d2237dfdcb64bf522869d5cb8e7d422ecf4944c024b1a"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.484642 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" event={"ID":"d42d553c-cafa-471c-8df7-395b8463615d","Type":"ContainerStarted","Data":"4782eb6198c6afb90a4207dd05ee738834224d4e0fdd4ac1d51cd8606b36b173"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.487806 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.488183 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:00.988164361 +0000 UTC m=+114.637072669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.518462 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njvn7" event={"ID":"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c","Type":"ContainerStarted","Data":"8e6fb2c8a558b7555c578fe5bc331d292d0365049e0fe6b3a517ba36a17bdfa0"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.534764 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" event={"ID":"721f448b-095b-4d7f-a367-512851e5c6d6","Type":"ContainerStarted","Data":"5e4483a9b7f13840cc96c61fe7922b98fbec7e1d080fa680d9a6faefdeee1d99"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.535536 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-t7lx4" podStartSLOduration=93.535516461 podStartE2EDuration="1m33.535516461s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:00.531509812 +0000 UTC m=+114.180418120" watchObservedRunningTime="2025-12-08 18:53:00.535516461 +0000 UTC m=+114.184424769" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.548030 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" event={"ID":"58b8eee8-00f8-4078-a0d1-3805d336771f","Type":"ContainerStarted","Data":"6dcb41bff652e1428dc21b5dd6d4372d275efeb7ae815d1ee5b98184a3d2f80a"} Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.567539 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.567623 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.582091 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.590055 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.594662 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-67c89758df-xlcnv" Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.594752 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.094724003 +0000 UTC m=+114.743632311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.696930 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.697330 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.197299096 +0000 UTC m=+114.846207404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.698787 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.754451 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.254424041 +0000 UTC m=+114.903332349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.810581 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.812116 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.312085192 +0000 UTC m=+114.960993500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: W1208 18:53:00.866153 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdbbc49a_37c4_45b0_8130_07bc71523d83.slice/crio-2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62 WatchSource:0}: Error finding container 2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62: Status 404 returned error can't find the container with id 2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62 Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.908822 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.909244 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.909350 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.929494 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:00 crc kubenswrapper[5004]: E1208 18:53:00.930024 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.429999958 +0000 UTC m=+115.078908266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.947811 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.971133 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4"] Dec 08 18:53:00 crc kubenswrapper[5004]: I1208 18:53:00.982910 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5"] Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.030658 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.031482 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.531458237 +0000 UTC m=+115.180366545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: W1208 18:53:01.085379 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode59f94c1_696f_4a7d_9178_199ddda2363c.slice/crio-9b20687c6a645cf702166154a524bd43770606fa9d89cc842ebc9ba9db90a7ec WatchSource:0}: Error finding container 9b20687c6a645cf702166154a524bd43770606fa9d89cc842ebc9ba9db90a7ec: Status 404 returned error can't find the container with id 9b20687c6a645cf702166154a524bd43770606fa9d89cc842ebc9ba9db90a7ec Dec 08 18:53:01 crc kubenswrapper[5004]: W1208 18:53:01.091159 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8b85e4c_d122_457d_b192_1b58a5de2630.slice/crio-9feaab035c5f0391b8bb2040a47847b4223ea07c380a982ffd4e3e0d547f8a84 WatchSource:0}: Error finding container 9feaab035c5f0391b8bb2040a47847b4223ea07c380a982ffd4e3e0d547f8a84: Status 404 returned error can't find the container with id 9feaab035c5f0391b8bb2040a47847b4223ea07c380a982ffd4e3e0d547f8a84 Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.132913 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.133545 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.633523643 +0000 UTC m=+115.282431951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.243941 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.244401 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:01.744373204 +0000 UTC m=+115.393281502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.351334 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.352016 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.851984478 +0000 UTC m=+115.500892966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.452888 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.453244 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.95322781 +0000 UTC m=+115.602136118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.557825 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.558216 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.05819997 +0000 UTC m=+115.707108278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.597086 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" event={"ID":"a795185a-7be1-4ab8-ba7e-63a53ecc6225","Type":"ContainerStarted","Data":"ef108b291aa5a670677b6eb1113626316b008cc7751418061a20a3b5047c885e"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.604641 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8cfds" event={"ID":"5c99fc6d-0d93-47fd-87fd-9e80ada9319c","Type":"ContainerStarted","Data":"80d91428c85b61499709ed8fb32cbcc19afe6696284136dc0f0444891ebb13c3"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.610709 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" event={"ID":"e43fb53d-bb94-4fff-88db-a8cd4066d647","Type":"ContainerStarted","Data":"11d4f06a69b4a61bc0e4fc08277d06ac7adc132db256ea3bb843b397d51d1921"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.628920 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" event={"ID":"eab26793-a1ea-412a-8bb6-592aeabd824e","Type":"ContainerStarted","Data":"28dfc2c56922ff049d3fe4669b9df37211f0157e6aca2433457589453e6e7ef0"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.662531 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" event={"ID":"29320623-cb93-488c-8bbf-4ac828a43a75","Type":"ContainerStarted","Data":"e2b8b7d5feff37ec86ea5b5183a7dc754c40bca9d54e80a87e6803eb6628776f"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.663300 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.663627 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.163612015 +0000 UTC m=+115.812520323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.681729 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" event={"ID":"295410e0-8c26-494c-89b5-fee76ecf0ff4","Type":"ContainerStarted","Data":"7071328605cab942942c464e5194c41efcb82cdf538bd06a39c904603598bed7"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.691108 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" event={"ID":"bcbec4b8-c62a-4f2a-8836-dc5571403963","Type":"ContainerStarted","Data":"9a2d152f378aec07864ec32adbb7f7a5f415d3d510945a50e7a2468b068133e4"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.718307 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" event={"ID":"762d046e-d753-4f82-afa3-90572628de64","Type":"ContainerStarted","Data":"34943121fe96fc4198b52466a962d3cf4b7f0f1fcb850c3ac85dbd22487eb9b6"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.720116 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" event={"ID":"e8b85e4c-d122-457d-b192-1b58a5de2630","Type":"ContainerStarted","Data":"9feaab035c5f0391b8bb2040a47847b4223ea07c380a982ffd4e3e0d547f8a84"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.723245 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-n96v4" event={"ID":"35b855ab-7531-48c9-8924-9a291c0ae509","Type":"ContainerStarted","Data":"3257889849b55d56e77ef00e80b99fb0155ef2a6a86a0709630e73359b5474df"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.725142 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" event={"ID":"fdbbc49a-37c4-45b0-8130-07bc71523d83","Type":"ContainerStarted","Data":"2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.725807 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" event={"ID":"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2","Type":"ContainerStarted","Data":"b3fe8664dde685e00b9efe96ce60e23065641f16fa826ffe2ddb682c598d50e2"} Dec 08 
18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.726402 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" event={"ID":"1612b92e-7bbe-499e-8162-32d2de1e36ab","Type":"ContainerStarted","Data":"9b11d5d2459bd30f7b65e92d6bc09a1ec335e45e217b2c472cfcc69120b69cda"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.727018 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" event={"ID":"e59f94c1-696f-4a7d-9178-199ddda2363c","Type":"ContainerStarted","Data":"9b20687c6a645cf702166154a524bd43770606fa9d89cc842ebc9ba9db90a7ec"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.760748 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" event={"ID":"d4540c2c-5c03-438a-ae32-89509db54eeb","Type":"ContainerStarted","Data":"de875be453872ffd3f2fb94fb0a778629d22b409f87c4105c8e3cc5ab29a39dd"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.764395 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.764692 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.264679521 +0000 UTC m=+115.913587829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.765343 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" event={"ID":"bdf0d9fe-459a-442c-b551-ba165104b4fd","Type":"ContainerStarted","Data":"9ee61d60f8e78cd88f1b9b9e8d05468321bcec5e3ba40bb70ec025a083738eec"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.768237 5004 generic.go:358] "Generic (PLEG): container finished" podID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerID="22576f18dacc30a5d1b1f46696bb88bf70e8b52f5308ec0121df6a7721fd50e7" exitCode=0 Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.768308 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerDied","Data":"22576f18dacc30a5d1b1f46696bb88bf70e8b52f5308ec0121df6a7721fd50e7"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.778059 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" event={"ID":"e1f41e8f-783b-443b-b8a8-4bcd32c803c2","Type":"ContainerStarted","Data":"eb748ba8a183aaf7ae47c2b9336053aee93084c21f30622bb8224d73f8396b0e"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.787888 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" event={"ID":"824dc6e4-c633-4036-b85f-ed97e63ec00e","Type":"ContainerStarted","Data":"24306e59ae6699ecbd9a4b6b81b056ecfd0762e4bb49ac3e8f2476ef5bb69e19"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.806457 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" event={"ID":"1922ff11-ecff-4b61-841e-f6b9decee4fd","Type":"ContainerStarted","Data":"c51f4e5822fc95d66e6369ac18f0692a2b2be6c341d7fa5f1fa2424f86910abf"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.809754 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" event={"ID":"58b8eee8-00f8-4078-a0d1-3805d336771f","Type":"ContainerStarted","Data":"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.811716 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" event={"ID":"b79abb7d-698b-41ba-95bf-59d9e718726a","Type":"ContainerStarted","Data":"64f82efc3014538227ab25f240e0c2236a054f6aa736ea5b654e878620cf01ac"} Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.846770 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.857288 5004 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z7q5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.857370 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.868830 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.869002 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.368978789 +0000 UTC m=+116.017887097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.869255 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mjhc2" podStartSLOduration=94.869225917 podStartE2EDuration="1m34.869225917s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:01.863145143 +0000 UTC m=+115.512053451" watchObservedRunningTime="2025-12-08 18:53:01.869225917 +0000 UTC m=+115.518134235" Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.869406 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.870790 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.370780248 +0000 UTC m=+116.019688556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.897643 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podStartSLOduration=94.897615509 podStartE2EDuration="1m34.897615509s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:01.891853474 +0000 UTC m=+115.540761782" watchObservedRunningTime="2025-12-08 18:53:01.897615509 +0000 UTC m=+115.546523817" Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.926409 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podStartSLOduration=93.926388733 podStartE2EDuration="1m33.926388733s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:01.923019795 +0000 UTC m=+115.571928103" watchObservedRunningTime="2025-12-08 18:53:01.926388733 +0000 UTC m=+115.575297061" Dec 08 18:53:01 crc kubenswrapper[5004]: I1208 18:53:01.971144 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[5004]: E1208 18:53:01.973807 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.473784785 +0000 UTC m=+116.122693113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.026358 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.073894 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.074494 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.574478788 +0000 UTC m=+116.223387096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.175452 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.175887 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.675867964 +0000 UTC m=+116.324776272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.239972 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:02 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:02 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:02 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.240037 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.285236 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.285550 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.785537416 +0000 UTC m=+116.434445724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.387877 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.388278 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.888257584 +0000 UTC m=+116.537165892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.466876 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-n96v4" podStartSLOduration=94.466861868 podStartE2EDuration="1m34.466861868s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:02.464973328 +0000 UTC m=+116.113881636" watchObservedRunningTime="2025-12-08 18:53:02.466861868 +0000 UTC m=+116.115770176" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.467273 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" podStartSLOduration=94.467267521 podStartE2EDuration="1m34.467267521s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:01.952936385 +0000 UTC m=+115.601844713" watchObservedRunningTime="2025-12-08 18:53:02.467267521 +0000 UTC m=+116.116175829" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.497646 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.497705 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.497731 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.497804 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.497846 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.498276 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.998259807 +0000 UTC m=+116.647168115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.604725 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.605153 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.606710 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.106687988 +0000 UTC m=+116.755596296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.644793 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89b69152-f317-4e7b-9215-fc6c71abc31f-metrics-certs\") pod \"network-metrics-daemon-7wmb8\" (UID: \"89b69152-f317-4e7b-9215-fc6c71abc31f\") " pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.675449 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wmb8" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.675821 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.706276 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.726405 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.727120 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.227099995 +0000 UTC m=+116.876008303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.789939 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.807727 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.828282 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.829102 5004 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.329063709 +0000 UTC m=+116.977972017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.863259 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.878395 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.900436 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[5004]: I1208 18:53:02.930378 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:02 crc kubenswrapper[5004]: E1208 18:53:02.930761 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.430746294 +0000 UTC m=+117.079654602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.036121 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.036403 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.536381595 +0000 UTC m=+117.185289903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.053926 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:03 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:03 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:03 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.054005 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.141804 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.142324 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.642306848 +0000 UTC m=+117.291215156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.163449 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" event={"ID":"d4540c2c-5c03-438a-ae32-89509db54eeb","Type":"ContainerStarted","Data":"6b5a339baebb47c3ce070b03a1a1f86eab0bbd50d8365b242b6c277724669f95"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.348018 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.349051 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.849024935 +0000 UTC m=+117.497933263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.410122 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" event={"ID":"765bbaba-9e29-4816-95f6-d2bc1a6fad23","Type":"ContainerStarted","Data":"9d78c729fd4f886d42a946bbb58109f30c3a13b0812bf83258f4e18ca477ef20"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.411591 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.412247 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mchbg" podStartSLOduration=96.412232884 podStartE2EDuration="1m36.412232884s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:03.411825481 +0000 UTC m=+117.060733789" watchObservedRunningTime="2025-12-08 18:53:03.412232884 +0000 UTC m=+117.061141192" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.413008 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nfdbk" podStartSLOduration=95.413001339 podStartE2EDuration="1m35.413001339s" 
podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:02.483963797 +0000 UTC m=+116.132872105" watchObservedRunningTime="2025-12-08 18:53:03.413001339 +0000 UTC m=+117.061909647" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.414197 5004 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-jhvdw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.414264 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" podUID="765bbaba-9e29-4816-95f6-d2bc1a6fad23" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.460683 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.462014 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:03.961999342 +0000 UTC m=+117.610907650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.520229 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" podStartSLOduration=95.520206792 podStartE2EDuration="1m35.520206792s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:03.51423159 +0000 UTC m=+117.163139898" watchObservedRunningTime="2025-12-08 18:53:03.520206792 +0000 UTC m=+117.169115110" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.620749 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" event={"ID":"824dc6e4-c633-4036-b85f-ed97e63ec00e","Type":"ContainerStarted","Data":"251b9b0b5f3f97baf6be6cdc8f50d5826ae1de14b2109fc8a0f82c90f642e32b"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.660114 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.660542 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.160519108 +0000 UTC m=+117.809427416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.755386 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" event={"ID":"1922ff11-ecff-4b61-841e-f6b9decee4fd","Type":"ContainerStarted","Data":"84693b9bf35eb7a6dae2b316748084f412209ae6d3de6c67fa8ac327fcdc1924"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.756897 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h287q" event={"ID":"bae61685-0786-4beb-9e73-fb50660d59a6","Type":"ContainerStarted","Data":"6a905ef64e55e2232fe2aeec2d2d6b191b9d5314ff3038ef2e53940126027a6e"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.770588 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.770929 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.270916942 +0000 UTC m=+117.919825250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.791672 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" event={"ID":"b79abb7d-698b-41ba-95bf-59d9e718726a","Type":"ContainerStarted","Data":"680d7dbd3269eb1d711681fae15b9f44fad42b896fd32c8a9ee8983af38aaf9a"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.792775 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.795625 5004 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-xw8q7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.795691 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" podUID="b79abb7d-698b-41ba-95bf-59d9e718726a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.805800 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h287q" podStartSLOduration=9.805779602 podStartE2EDuration="9.805779602s" podCreationTimestamp="2025-12-08 18:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:03.804985066 +0000 UTC m=+117.453893374" watchObservedRunningTime="2025-12-08 18:53:03.805779602 +0000 UTC m=+117.454687920" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.844214 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" event={"ID":"974ef9b5-cdf4-470e-8df3-f132304df404","Type":"ContainerStarted","Data":"0ede47637b48cffe0a56f5c07e20c0b0a3fe28f8c45530d4a5eb2df2d39e3b63"} Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.890414 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:03 crc kubenswrapper[5004]: E1208 18:53:03.901825 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.401793034 +0000 UTC m=+118.050701342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.908494 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" podStartSLOduration=95.908475699 podStartE2EDuration="1m35.908475699s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:03.904227412 +0000 UTC m=+117.553135720" watchObservedRunningTime="2025-12-08 18:53:03.908475699 +0000 UTC m=+117.557384007" Dec 08 18:53:03 crc kubenswrapper[5004]: I1208 18:53:03.975900 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" event={"ID":"29320623-cb93-488c-8bbf-4ac828a43a75","Type":"ContainerStarted","Data":"0c4ebbe44b6cedf57af54791f2b39221939d67357b51a404adc5fd4cee302d31"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.002938 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.003748 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.503727098 +0000 UTC m=+118.152635466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.073380 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:04 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:04 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:04 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.073477 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.074005 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" event={"ID":"bcbec4b8-c62a-4f2a-8836-dc5571403963","Type":"ContainerStarted","Data":"432c87f8565bfd24f64609b10079e341be43bcb30504266c64ff39db35eadc51"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.090256 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" podStartSLOduration=97.090239126 podStartE2EDuration="1m37.090239126s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:04.047846124 +0000 UTC m=+117.696754432" watchObservedRunningTime="2025-12-08 18:53:04.090239126 +0000 UTC m=+117.739147434" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.103902 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.105414 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.605393273 +0000 UTC m=+118.254301581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.140725 5004 ???:1] "http: TLS handshake error from 192.168.126.11:52508: no serving certificate available for the kubelet" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.203045 5004 generic.go:358] "Generic (PLEG): container finished" podID="39fd2fcf-66db-41da-bf3b-30d991d74c76" containerID="ba49c57c11d4a34f74ecafa24de2935c2acca758d68d80e1379249524856edb4" exitCode=0 Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.203167 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" event={"ID":"39fd2fcf-66db-41da-bf3b-30d991d74c76","Type":"ContainerDied","Data":"ba49c57c11d4a34f74ecafa24de2935c2acca758d68d80e1379249524856edb4"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.213292 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.221713 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.721694988 +0000 UTC m=+118.370603296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.279598 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" event={"ID":"00acb591-ba56-4806-b41d-2efe11b0637d","Type":"ContainerStarted","Data":"a7b0e301f0f0fb7a25b8ec37a12b46689bfe443ddcd030c382aa355529b984ba"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.324794 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.325629 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:04.825605585 +0000 UTC m=+118.474513893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.327278 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" event={"ID":"4d5b812c-79db-4f9c-9102-a2c785563717","Type":"ContainerStarted","Data":"6190b4e1c045cfa34562d9b858b6b6abf649f969ee5ecdcb717e2e01dedb6322"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.328396 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.373866 5004 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-chbws container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.373925 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" podUID="4d5b812c-79db-4f9c-9102-a2c785563717" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.377370 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" event={"ID":"a1aa164d-cf7a-4c71-90db-3488e29d60a2","Type":"ContainerStarted","Data":"07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.377621 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.439668 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.440045 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.940028659 +0000 UTC m=+118.588936967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.548890 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.549577 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.049541175 +0000 UTC m=+118.698449483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.564314 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" event={"ID":"d42d553c-cafa-471c-8df7-395b8463615d","Type":"ContainerStarted","Data":"7c4d36316771ddeefcb441e7a27ab8d937733ab6ae6a3ff3e6c9e84fdb694f4c"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.566323 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" event={"ID":"fdbbc49a-37c4-45b0-8130-07bc71523d83","Type":"ContainerStarted","Data":"d4817fae5f449b4d6832eb666a95df30ac1f94c04b30b8b749b369925de36534"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.570886 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njvn7" event={"ID":"bcfacaf1-601b-4cb6-9c0e-528f2e5d655c","Type":"ContainerStarted","Data":"8912e3b20623ecd0d893230486ff3c3e8ba98e5c23e7ebb876e7110a62142bbc"} Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.570922 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.573033 5004 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z7q5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.574886 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.657118 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.660794 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.160776827 +0000 UTC m=+118.809685125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.671701 5004 ???:1] "http: TLS handshake error from 192.168.126.11:52512: no serving certificate available for the kubelet" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.673251 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-jcs6x" podStartSLOduration=96.673233957 podStartE2EDuration="1m36.673233957s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:04.184017308 +0000 UTC m=+117.832925616" watchObservedRunningTime="2025-12-08 18:53:04.673233957 +0000 UTC m=+118.322142265" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.675442 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-xsdsz" podStartSLOduration=96.675431348 podStartE2EDuration="1m36.675431348s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:04.674249939 +0000 UTC m=+118.323158257" watchObservedRunningTime="2025-12-08 18:53:04.675431348 +0000 UTC m=+118.324339656" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.758672 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.759062 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:05.259039163 +0000 UTC m=+118.907947471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.934654 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.936188 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:04 crc kubenswrapper[5004]: E1208 18:53:04.936475 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.4364616 +0000 UTC m=+119.085369908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[5004]: I1208 18:53:04.944145 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.038431 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.039030 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.539008203 +0000 UTC m=+119.187916521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.075648 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:05 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:05 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:05 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.075714 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.088425 5004 ???:1] "http: TLS handshake error from 192.168.126.11:52514: no serving certificate available for the kubelet" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.143765 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.144169 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.64415441 +0000 UTC m=+119.293062718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.229269 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-njvn7" podStartSLOduration=11.229246462 podStartE2EDuration="11.229246462s" podCreationTimestamp="2025-12-08 18:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:04.935674915 +0000 UTC m=+118.584583233" watchObservedRunningTime="2025-12-08 18:53:05.229246462 +0000 UTC m=+118.878154770" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.245702 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.245890 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.745863265 +0000 UTC m=+119.394771583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.246125 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.246530 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:05.746521907 +0000 UTC m=+119.395430225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.501843 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.502730 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.002699973 +0000 UTC m=+119.651608281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.503366 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" podStartSLOduration=97.503339473 podStartE2EDuration="1m37.503339473s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:05.500874674 +0000 UTC m=+119.149783002" watchObservedRunningTime="2025-12-08 18:53:05.503339473 +0000 UTC m=+119.152247781" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.673425 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55786: no serving certificate available for the kubelet" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.676128 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.676197 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.676137 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: 
\"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.676477 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.176459772 +0000 UTC m=+119.825368080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.827573 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.828179 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.328147173 +0000 UTC m=+119.977055481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.905275 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podStartSLOduration=11.905171147 podStartE2EDuration="11.905171147s" podCreationTimestamp="2025-12-08 18:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:05.899235276 +0000 UTC m=+119.548143584" watchObservedRunningTime="2025-12-08 18:53:05.905171147 +0000 UTC m=+119.554079455" Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.929512 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:05 crc kubenswrapper[5004]: E1208 18:53:05.930005 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:06.429985303 +0000 UTC m=+120.078893611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:05 crc kubenswrapper[5004]: I1208 18:53:05.980533 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55790: no serving certificate available for the kubelet" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.017882 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" event={"ID":"a795185a-7be1-4ab8-ba7e-63a53ecc6225","Type":"ContainerStarted","Data":"038b5fed4cc18409f3ddd561a564258b760e54072170d32fedf07aae723e59c0"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.065843 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.066114 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.566095593 +0000 UTC m=+120.215003901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.104105 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8cfds" event={"ID":"5c99fc6d-0d93-47fd-87fd-9e80ada9319c","Type":"ContainerStarted","Data":"53d934f6f673ab916d294bb739d0f7b0fa313c8307fef9c35a100a5135472d57"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.138461 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" podStartSLOduration=98.138428326 podStartE2EDuration="1m38.138428326s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:05.963709966 +0000 UTC m=+119.612618274" watchObservedRunningTime="2025-12-08 18:53:06.138428326 +0000 UTC m=+119.787336634" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.143527 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-tnw6x" event={"ID":"e43fb53d-bb94-4fff-88db-a8cd4066d647","Type":"ContainerStarted","Data":"f9c71bb026685821d73d5a97d8e8a2f70af32841c5383b38f94af1b024e87ed0"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.171143 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.174805 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.674787784 +0000 UTC m=+120.323696102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.177178 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" event={"ID":"762d046e-d753-4f82-afa3-90572628de64","Type":"ContainerStarted","Data":"4acb4a3624efe6ddaf3b3e01a2cd6d529630f0d67660fbea480d6dabd7dbdeb3"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.179922 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" event={"ID":"e8b85e4c-d122-457d-b192-1b58a5de2630","Type":"ContainerStarted","Data":"c6cd05b0288cc87dbddc02b9325e9b9c6c89e4b871f6ac2cd65c9c9727d805a0"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.181790 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" event={"ID":"1eedd08c-0f7f-4181-b8a3-e80d7f81c2a2","Type":"ContainerStarted","Data":"c38a2f800d603c5d7e3bba30d6cefe05968a5ffcf43ab17beb315eb2ab7b18a6"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.184505 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" event={"ID":"1612b92e-7bbe-499e-8162-32d2de1e36ab","Type":"ContainerStarted","Data":"d2b7b4346662bc91b8c7a462e845177a6a52534b83ee9b8975dff0e5c492f48b"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.187509 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" event={"ID":"e59f94c1-696f-4a7d-9178-199ddda2363c","Type":"ContainerStarted","Data":"28fd9bb6e3a78840a91101690364be79929ef02d86ecc01a83f8d38a4b10f451"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.189595 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerStarted","Data":"60e34d6e132bcae4faba1b0e259a9e37401c31b29ce14400fdd829d8116d6140"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.190297 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.194255 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" event={"ID":"824dc6e4-c633-4036-b85f-ed97e63ec00e","Type":"ContainerStarted","Data":"516ee5780df137a9ff64e6fea2152c717ad24d5c57578a584d504072dc85127d"} Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.198594 5004 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-jhvdw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.198705 5004 
prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" podUID="765bbaba-9e29-4816-95f6-d2bc1a6fad23" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.199159 5004 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-xw8q7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.199256 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" podUID="b79abb7d-698b-41ba-95bf-59d9e718726a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.199372 5004 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z7q5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.199403 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.204362 5004 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-chbws container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.204406 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" podUID="4d5b812c-79db-4f9c-9102-a2c785563717" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.311687 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.312050 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.81201903 +0000 UTC m=+120.460927338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.398931 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:06 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:06 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:06 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.399039 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.426137 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.433331 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:06.933298955 +0000 UTC m=+120.582207263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.449850 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.530592 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.530804 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.030787505 +0000 UTC m=+120.679695813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.535593 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55804: no serving certificate available for the kubelet" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.686861 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.687913 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.18788278 +0000 UTC m=+120.836791088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.789277 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:06 crc kubenswrapper[5004]: E1208 18:53:06.789781 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.289759061 +0000 UTC m=+120.938667369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.791883 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podStartSLOduration=99.791856758 podStartE2EDuration="1m39.791856758s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:06.742222715 +0000 UTC m=+120.391131033" watchObservedRunningTime="2025-12-08 18:53:06.791856758 +0000 UTC m=+120.440765066" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.792680 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-p8vc4" podStartSLOduration=98.792671545 podStartE2EDuration="1m38.792671545s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:06.600332058 +0000 UTC m=+120.249240366" watchObservedRunningTime="2025-12-08 18:53:06.792671545 +0000 UTC m=+120.441579853" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.802927 5004 patch_prober.go:28] interesting pod/console-64d44f6ddf-t7lx4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.803476 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-t7lx4" podUID="b2c5e9e8-9b38-40fe-89fa-34d128ee718c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.834108 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.834329 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.834422 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkxw8"] Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 18:53:06.863679 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tgfjp" podStartSLOduration=98.863652843 podStartE2EDuration="1m38.863652843s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:06.815452066 +0000 UTC m=+120.464360374" watchObservedRunningTime="2025-12-08 18:53:06.863652843 +0000 UTC m=+120.512561151" Dec 08 18:53:06 crc kubenswrapper[5004]: I1208 
18:53:06.891891 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:06.983120 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dns59" podStartSLOduration=98.98310042 podStartE2EDuration="1m38.98310042s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:06.869057177 +0000 UTC m=+120.517965485" watchObservedRunningTime="2025-12-08 18:53:06.98310042 +0000 UTC m=+120.632008728" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:06.990993 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.490967902 +0000 UTC m=+121.139876210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:06.993854 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.000397 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.500357384 +0000 UTC m=+121.149265702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.001530 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.002854 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.502842374 +0000 UTC m=+121.151750682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.044402 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-sq7b5" podStartSLOduration=100.044369677 podStartE2EDuration="1m40.044369677s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:07.018683772 +0000 UTC m=+120.667592100" watchObservedRunningTime="2025-12-08 18:53:07.044369677 +0000 UTC m=+120.693277985" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.054415 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.068389 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:07 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:07 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:07 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.068472 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.102531 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.105981 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.605958324 +0000 UTC m=+121.254866632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.131603 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55820: no serving certificate available for the kubelet" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.209113 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.211055 5004 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-jhvdw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.211131 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" podUID="765bbaba-9e29-4816-95f6-d2bc1a6fad23" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.250367 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.75034042 +0000 UTC m=+121.399248728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.256470 5004 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z7q5s container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.256541 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.320682 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.321590 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.821551947 +0000 UTC m=+121.470460255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.338543 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" event={"ID":"1922ff11-ecff-4b61-841e-f6b9decee4fd","Type":"ContainerStarted","Data":"916a9853340f39be248a44f728d1aea245dc393fe62841d646f43540d3d4834f"} Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.396729 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55822: no serving certificate available for the kubelet" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.398461 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"16590d376c7ebda57889c22c359b91cf0e578a43d1342b4e75b3398b57a920a8"} Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.407229 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-88whn" podStartSLOduration=99.407193367 podStartE2EDuration="1m39.407193367s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:07.398605881 +0000 UTC m=+121.047514189" watchObservedRunningTime="2025-12-08 18:53:07.407193367 +0000 UTC m=+121.056101705" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.423190 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.424110 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:07.924062458 +0000 UTC m=+121.572970936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.573257 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.574400 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.074359545 +0000 UTC m=+121.723267853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.574758 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.575271 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.075259113 +0000 UTC m=+121.724167421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.588359 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" event={"ID":"974ef9b5-cdf4-470e-8df3-f132304df404","Type":"ContainerStarted","Data":"b89471b38ef00cd0b982fb0b759ff2c41887bb4c01a94a3c4077acd062154114"} Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.603822 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" event={"ID":"d42d553c-cafa-471c-8df7-395b8463615d","Type":"ContainerStarted","Data":"1f68306f9f569220058e2deb55355160c31d18bcb1be52655034d716e39dfdec"} Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.604941 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.663941 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.664018 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.679707 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.681135 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.181116933 +0000 UTC m=+121.830025231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.726101 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-xw8q7" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.770324 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55826: no serving certificate available for the kubelet" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.783084 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.783699 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.283679546 +0000 UTC m=+121.932587854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.793596 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jhvdw" Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.889001 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.889778 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.389752852 +0000 UTC m=+122.038661160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:07 crc kubenswrapper[5004]: I1208 18:53:07.991584 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:07 crc kubenswrapper[5004]: E1208 18:53:07.992579 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.492559433 +0000 UTC m=+122.141467741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.101303 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:08 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:08 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:08 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.101464 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.102485 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.103032 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.60300505 +0000 UTC m=+122.251913358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.193013 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zmngf" podStartSLOduration=101.1929851 podStartE2EDuration="1m41.1929851s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:07.720993883 +0000 UTC m=+121.369902181" watchObservedRunningTime="2025-12-08 18:53:08.1929851 +0000 UTC m=+121.841893408" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.320337 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.320929 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.820907157 +0000 UTC m=+122.469815465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.338274 5004 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-chbws container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.338359 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" podUID="4d5b812c-79db-4f9c-9102-a2c785563717" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.421802 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.423310 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.923290264 +0000 UTC m=+122.572198572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.424028 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" podStartSLOduration=100.423983367 podStartE2EDuration="1m40.423983367s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:08.19488158 +0000 UTC m=+121.843789898" watchObservedRunningTime="2025-12-08 18:53:08.423983367 +0000 UTC m=+122.072891675" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.490460 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-zvml8" podStartSLOduration=100.49042091 podStartE2EDuration="1m40.49042091s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:08.420176224 +0000 UTC m=+122.069084552" watchObservedRunningTime="2025-12-08 18:53:08.49042091 +0000 UTC m=+122.139329218" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.527374 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.527847 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.027835102 +0000 UTC m=+122.676743410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.667207 5004 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-chbws container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" start-of-body= Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.667340 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" podUID="4d5b812c-79db-4f9c-9102-a2c785563717" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.674213 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7wmb8"] Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.692425 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.693356 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.193326916 +0000 UTC m=+122.842235224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.694671 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" podStartSLOduration=100.694652958 podStartE2EDuration="1m40.694652958s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:08.669878353 +0000 UTC m=+122.318786651" watchObservedRunningTime="2025-12-08 18:53:08.694652958 +0000 UTC m=+122.343561276" Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.793525 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4e604d9f7ddccf6a8d33f1e63a387ec96b985f06ba8ddd46fd08c5aab95e11e6"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.793599 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"1ac03df8f92322109e6b8f86f5c4e5fd39e00cc95c0a3ede01e287b0703e9a27"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.793617 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8cfds" event={"ID":"5c99fc6d-0d93-47fd-87fd-9e80ada9319c","Type":"ContainerStarted","Data":"996e4c6a239c58c1e23d2ec9cd79529d403a8152195cc6151b3f542213808513"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.799441 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"07bffe15f60c705ca6051d31012044fe77842b6f636d381b0400463c0f566c66"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.801454 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.802372 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.302346767 +0000 UTC m=+122.951255075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.897920 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" event={"ID":"39fd2fcf-66db-41da-bf3b-30d991d74c76","Type":"ContainerStarted","Data":"0bdb45d88b7a9dd0f5d6347d431f9cd329b27861cff1ec05080bca9d04bbdf8b"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.920926 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:08 crc kubenswrapper[5004]: E1208 18:53:08.921791 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.421588345 +0000 UTC m=+123.070496653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.925036 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" event={"ID":"00acb591-ba56-4806-b41d-2efe11b0637d","Type":"ContainerStarted","Data":"8b3b277fa7a8879a00503c47973bbd39a8a222b6fa08dbc16ceda816b959b848"} Dec 08 18:53:08 crc kubenswrapper[5004]: I1208 18:53:08.928265 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" event={"ID":"e59f94c1-696f-4a7d-9178-199ddda2363c","Type":"ContainerStarted","Data":"a827c28448e0fcff7ca06098a9562e93efef5e82a6c17f2384325cf664d2e120"} Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.054766 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.054863 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:09 crc 
kubenswrapper[5004]: I1208 18:53:09.055274 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.055432 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.555409342 +0000 UTC m=+123.204317650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.076091 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:09 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:09 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:09 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.076201 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.156232 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.156497 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.656478057 +0000 UTC m=+123.305386365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.200626 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" gracePeriod=30 Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.201814 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.201847 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.257517 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.258051 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.758018698 +0000 UTC m=+123.406927006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.363252 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.365082 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.865020174 +0000 UTC m=+123.513928612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.365299 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.367724 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.867711811 +0000 UTC m=+123.516620199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.475842 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.476293 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:09.976267256 +0000 UTC m=+123.625175564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.572138 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55836: no serving certificate available for the kubelet" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.577962 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.578510 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.078475049 +0000 UTC m=+123.727383357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.645779 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" podStartSLOduration=101.645754059 podStartE2EDuration="1m41.645754059s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:09.510693222 +0000 UTC m=+123.159601530" watchObservedRunningTime="2025-12-08 18:53:09.645754059 +0000 UTC m=+123.294662367" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.646147 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-c2wzq" podStartSLOduration=102.646138931 podStartE2EDuration="1m42.646138931s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:09.642112272 +0000 UTC m=+123.291020590" watchObservedRunningTime="2025-12-08 18:53:09.646138931 +0000 UTC m=+123.295047239" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.681676 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.681903 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.181883179 +0000 UTC m=+123.830791487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.760266 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-4l7n9" podStartSLOduration=101.760248816 podStartE2EDuration="1m41.760248816s" podCreationTimestamp="2025-12-08 18:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:09.759918885 +0000 UTC m=+123.408827193" watchObservedRunningTime="2025-12-08 18:53:09.760248816 +0000 UTC m=+123.409157124" Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.785854 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.786436 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.286412725 +0000 UTC m=+123.935321033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.886936 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.887184 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.3871569 +0000 UTC m=+124.036065208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:09 crc kubenswrapper[5004]: I1208 18:53:09.887343 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:09 crc kubenswrapper[5004]: E1208 18:53:09.887741 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.387722909 +0000 UTC m=+124.036631227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.010308 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.010607 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.510587454 +0000 UTC m=+124.159495762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.013516 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"328ee20cea243bbc247a7910166d20b110abb4b49158a08d4a5f7968e5b670ba"} Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.023554 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" event={"ID":"89b69152-f317-4e7b-9215-fc6c71abc31f","Type":"ContainerStarted","Data":"5a8a5ba4c6d03c5ad148ce05abe1f08689424d36a57a79def08c95a55ea7a730"} Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.023663 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.023964 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-8cfds" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.034305 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:10 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:10 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:10 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.034406 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.113872 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.116976 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.616950499 +0000 UTC m=+124.265858807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.215043 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.215611 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.715591197 +0000 UTC m=+124.364499505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.317058 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.317727 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.817699396 +0000 UTC m=+124.466607704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.429121 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.429456 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.929417684 +0000 UTC m=+124.578325992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.429646 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.430328 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:10.930309461 +0000 UTC m=+124.579217769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.431957 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8cfds" podStartSLOduration=16.431928714 podStartE2EDuration="16.431928714s" podCreationTimestamp="2025-12-08 18:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:10.281644118 +0000 UTC m=+123.930552416" watchObservedRunningTime="2025-12-08 18:53:10.431928714 +0000 UTC m=+124.080837022" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.531142 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.531300 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.031259453 +0000 UTC m=+124.680167761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.531465 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.532133 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.032125181 +0000 UTC m=+124.681033489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.549825 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.549912 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.633333 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.633650 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.133633961 +0000 UTC m=+124.782542269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.723635 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.723669 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.738194 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.738591 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:11.23857424 +0000 UTC m=+124.887482548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.763020 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.776667 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.788740 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.803006 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.840383 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.840570 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.840727 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.840825 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txws6\" (UniqueName: \"kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.840947 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.340929207 +0000 UTC m=+124.989837515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.919401 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.940883 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.946263 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-txws6\" (UniqueName: \"kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.946311 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.946386 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.946425 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.946964 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.947306 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:10 crc kubenswrapper[5004]: E1208 18:53:10.947373 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:11.447340374 +0000 UTC m=+125.096248842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.958934 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:53:10 crc kubenswrapper[5004]: I1208 18:53:10.960468 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.036712 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-txws6\" (UniqueName: \"kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6\") pod \"certified-operators-rg666\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.043345 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:11 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:11 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:11 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.043421 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.053768 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.054222 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.054283 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28mq\" (UniqueName: \"kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.054322 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.054471 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.554444342 +0000 UTC m=+125.203352650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.106312 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"c2466d7064958e9c727f432ba496b2761c157a1d7a8a8ba3abfe8c355e1c2cbf"} Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.123557 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.156441 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" event={"ID":"89b69152-f317-4e7b-9215-fc6c71abc31f","Type":"ContainerStarted","Data":"5ca8582f39b33efe68d9de5a7512aef5923b81fbccd8577281bbbc6e3a59965f"} Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.184102 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l28mq\" (UniqueName: \"kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.184171 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.184210 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.184269 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities\") pod \"community-operators-v879b\" (UID: 
\"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.184739 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.186192 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.186444 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.686428001 +0000 UTC m=+125.335336309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.242576 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.288783 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.291410 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.292635 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.792618441 +0000 UTC m=+125.441526749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.349668 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.357336 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.393050 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.393181 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.393221 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.393244 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmm2\" (UniqueName: \"kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.393638 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:11.893624144 +0000 UTC m=+125.542532452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.443781 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l28mq\" (UniqueName: \"kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq\") pod \"community-operators-v879b\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.501003 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.501871 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.509387 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.00935209 +0000 UTC m=+125.658260398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.509649 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq66s\" (UniqueName: \"kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.509752 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.509808 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.509854 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.509880 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmm2\" (UniqueName: \"kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.510057 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.510156 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.510869 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.511229 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.01122026 +0000 UTC m=+125.660128568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.511806 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.615943 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.616721 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.616788 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lq66s\" (UniqueName: \"kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.616825 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.617529 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.628657 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.645531 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.145488501 +0000 UTC m=+125.794396819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.659504 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.700111 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmm2\" (UniqueName: \"kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2\") pod \"certified-operators-scjp4\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.725411 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.732182 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.732621 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.232605569 +0000 UTC m=+125.881513877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.832783 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq66s\" (UniqueName: \"kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s\") pod \"community-operators-zhs6h\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.833344 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.833452 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.333436277 +0000 UTC m=+125.982344585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.833627 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.833914 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.333907482 +0000 UTC m=+125.982815780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.959096 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:11 crc kubenswrapper[5004]: E1208 18:53:11.959531 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.459512055 +0000 UTC m=+126.108420363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.960663 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:53:11 crc kubenswrapper[5004]: I1208 18:53:11.965409 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.052404 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.052458 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.061028 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.061354 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.561342116 +0000 UTC m=+126.210250424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.090348 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.091680 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.105911 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:12 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:12 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:12 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.106382 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.170283 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.171472 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.671453141 +0000 UTC m=+126.320361459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.192283 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wmb8" event={"ID":"89b69152-f317-4e7b-9215-fc6c71abc31f","Type":"ContainerStarted","Data":"c854b2c1c61a105cf27cc6a808cd9a98a8a22873d2b7bbc9f4393e64b95d7f55"} Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.202969 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.203051 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.274190 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.275397 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.775382128 +0000 UTC m=+126.424290436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.374918 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.375404 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.87538337 +0000 UTC m=+126.524291678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.482061 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.482446 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:12.982434817 +0000 UTC m=+126.631343125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.590002 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.590327 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.09027862 +0000 UTC m=+126.739186938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.590861 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.591498 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.091474078 +0000 UTC m=+126.740382386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.694635 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.699697 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.199661333 +0000 UTC m=+126.848569631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.826904 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.827240 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.327226469 +0000 UTC m=+126.976134777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.900724 5004 ???:1] "http: TLS handshake error from 192.168.126.11:55846: no serving certificate available for the kubelet" Dec 08 18:53:12 crc kubenswrapper[5004]: I1208 18:53:12.928060 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:12 crc kubenswrapper[5004]: E1208 18:53:12.928463 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.428437759 +0000 UTC m=+127.077346067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.031865 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.035343 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.535320701 +0000 UTC m=+127.184229009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.038499 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:13 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:13 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:13 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.038602 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.128615 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7wmb8" podStartSLOduration=106.128596466 podStartE2EDuration="1m46.128596466s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:12.827676013 +0000 UTC m=+126.476584321" watchObservedRunningTime="2025-12-08 18:53:13.128596466 +0000 UTC m=+126.777504774" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.128888 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.140308 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.154669 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.155327 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.155526 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.155556 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czptg\" (UniqueName: \"kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.155947 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.655928904 +0000 UTC m=+127.304837212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.169561 5004 status_manager.go:895] "Failed to get status for pod" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" pod="openshift-marketplace/redhat-marketplace-fkpfb" err="pods \"redhat-marketplace-fkpfb\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.169633 5004 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-gg4w7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" type="*v1.Secret" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.219916 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.225660 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" event={"ID":"721f448b-095b-4d7f-a367-512851e5c6d6","Type":"ContainerStarted","Data":"262e34a2f81cc387faadff2a3dc1eed92198a8f2cf4a7bcd912a59f19afa6529"} Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.256351 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.256410 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.256524 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.256744 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-czptg\" (UniqueName: \"kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc 
kubenswrapper[5004]: E1208 18:53:13.257617 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.757564977 +0000 UTC m=+127.406473455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.257660 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.257840 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.357309 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.357764 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:13.857747004 +0000 UTC m=+127.506655312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.375679 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czptg\" (UniqueName: \"kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg\") pod \"redhat-marketplace-fkpfb\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.507102 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.507428 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.00741613 +0000 UTC m=+127.656324438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.534215 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.543369 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.607985 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.608279 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj964\" (UniqueName: \"kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.608326 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.608401 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.608543 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.108523387 +0000 UTC m=+127.757431695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.623286 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.773282 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sj964\" (UniqueName: \"kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.773809 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.773846 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.773942 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.778909 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.278827445 +0000 UTC m=+127.927735753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.780560 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.789277 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.876177 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj964\" (UniqueName: \"kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964\") pod \"redhat-marketplace-lt66j\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.887552 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.888321 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.38828483 +0000 UTC m=+128.037193218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.888579 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.889198 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.389185189 +0000 UTC m=+128.038093507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.921694 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.977450 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.990837 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.991042 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs2x\" (UniqueName: \"kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.991113 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:13 crc kubenswrapper[5004]: I1208 18:53:13.991179 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:13 crc kubenswrapper[5004]: E1208 18:53:13.991276 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.491259816 +0000 UTC m=+128.140168124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.002422 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.047299 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:14 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:14 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:14 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.047422 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.060532 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.093083 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.093141 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.093176 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhs2x\" (UniqueName: \"kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.093229 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.093853 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content\") pod 
\"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.094271 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.594251684 +0000 UTC m=+128.243159992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.094544 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.160956 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8cfds" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.179471 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs2x\" (UniqueName: \"kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x\") pod \"redhat-operators-h9jcq\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.191777 5004 patch_prober.go:28] interesting pod/apiserver-8596bd845d-dwxjt container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]log ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]etcd ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 18:53:14 crc kubenswrapper[5004]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/max-in-flight-filter ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-StartUserInformer ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-StartOAuthInformer ok Dec 08 18:53:14 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Dec 08 18:53:14 crc kubenswrapper[5004]: livez check failed Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.191859 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" podUID="39fd2fcf-66db-41da-bf3b-30d991d74c76" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.197620 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.198937 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.698918805 +0000 UTC m=+128.347827113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.208952 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.236638 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.299335 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spd7\" (UniqueName: \"kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.299382 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.299437 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.299503 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.299824 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.799811994 +0000 UTC m=+128.448720302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.327564 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.335342 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.400963 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.401160 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.401211 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9spd7\" (UniqueName: \"kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.401241 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.401748 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.401814 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:14.90179954 +0000 UTC m=+128.550707848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.402008 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.508355 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.508823 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.008808036 +0000 UTC m=+128.657716344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.509210 5004 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-marketplace-fkpfb" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.509266 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.612135 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.612414 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.112391692 +0000 UTC m=+128.761300000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.678708 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.679678 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.709288 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.796272 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.796924 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.296909027 +0000 UTC m=+128.945817335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.850320 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.857560 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spd7\" (UniqueName: \"kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7\") pod \"redhat-operators-t7l7m\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.895950 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:53:14 crc kubenswrapper[5004]: I1208 18:53:14.908748 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.909288 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.409270915 +0000 UTC m=+129.058179223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.923503 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:14 crc kubenswrapper[5004]: E1208 18:53:14.923652 5004 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.011879 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.029551 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.030016 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.529999021 +0000 UTC m=+129.178907329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.055767 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.295799 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.296439 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.796412836 +0000 UTC m=+129.445321144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.319649 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:15 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.319731 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.347995 5004 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-nx2nz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]log ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]etcd ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/max-in-flight-filter ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 18:53:15 crc kubenswrapper[5004]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 18:53:15 crc kubenswrapper[5004]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-startinformers ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 18:53:15 crc kubenswrapper[5004]: livez check failed Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.348121 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" podUID="974ef9b5-cdf4-470e-8df3-f132304df404" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.406210 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.406271 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.406344 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.406697 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:15.906680167 +0000 UTC m=+129.555588475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.519717 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.519930 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.519972 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.520615 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.020590395 +0000 UTC m=+129.669498733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.520657 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.536758 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.537172 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.552208 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.586351 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.586448 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.622824 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.623849 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.12382914 +0000 UTC m=+129.772737448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.737819 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.738329 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.238299475 +0000 UTC m=+129.887207793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.748152 5004 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-nx2nz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]log ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]etcd ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/max-in-flight-filter ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 18:53:15 crc kubenswrapper[5004]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-startinformers ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 18:53:15 crc kubenswrapper[5004]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 18:53:15 crc kubenswrapper[5004]: livez check failed Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.748248 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" 
podUID="974ef9b5-cdf4-470e-8df3-f132304df404" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.912647 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:15 crc kubenswrapper[5004]: E1208 18:53:15.913064 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.413047817 +0000 UTC m=+130.061956125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.951793 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:15 crc kubenswrapper[5004]: I1208 18:53:15.988170 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.018366 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.018716 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.518682739 +0000 UTC m=+130.167591047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.068245 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.068749 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.068835 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.069590 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"60e34d6e132bcae4faba1b0e259a9e37401c31b29ce14400fdd829d8116d6140"} pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.069673 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" containerID="cri-o://60e34d6e132bcae4faba1b0e259a9e37401c31b29ce14400fdd829d8116d6140" gracePeriod=30 Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.093340 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:16 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:16 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:16 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.093418 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.115256 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.119980 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.120351 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.620338873 +0000 UTC m=+130.269247171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.232787 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.259483 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.260138 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.760106101 +0000 UTC m=+130.409014409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.272497 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.312488 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.312580 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.318867 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.320641 5004 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-wqg6t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.320746 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" podUID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.362271 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.363680 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.863663657 +0000 UTC m=+130.512571965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.477450 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.477855 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.478604 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.978576557 +0000 UTC m=+130.627484865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.562525 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerStarted","Data":"5ada913b41cec63c2cc080586519f21e385c4f2f123fa4c1c96fdc680db2fd76"} Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.579091 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerStarted","Data":"409846b07dc062323baa00666d71f5efb160a2883d862a53b9151f30b7c484a4"} Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.581365 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.581818 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.081799191 +0000 UTC m=+130.730707499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.594139 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerStarted","Data":"0e4e61fdaebb417dbeafca951b1e261b31c2066a516f59b556957a9214eaf07c"} Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.610600 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.622265 5004 generic.go:358] "Generic (PLEG): container finished" podID="5d3eaa17-c643-4536-88a0-a76854e545ab" containerID="60e34d6e132bcae4faba1b0e259a9e37401c31b29ce14400fdd829d8116d6140" exitCode=255 Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.622355 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerDied","Data":"60e34d6e132bcae4faba1b0e259a9e37401c31b29ce14400fdd829d8116d6140"} Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.690357 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.690785 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.19075908 +0000 UTC m=+130.839667388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.791790 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.792312 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.292293951 +0000 UTC m=+130.941202259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.792663 5004 patch_prober.go:28] interesting pod/console-64d44f6ddf-t7lx4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.792740 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-t7lx4" podUID="b2c5e9e8-9b38-40fe-89fa-34d128ee718c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.893973 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.894232 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.394190122 +0000 UTC m=+131.043098440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.894779 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.897605 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.397587501 +0000 UTC m=+131.046495819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[5004]: I1208 18:53:16.995929 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[5004]: E1208 18:53:16.996462 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.496439336 +0000 UTC m=+131.145347644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.042339 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:17 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:17 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:17 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.042831 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.097958 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.098426 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.598410201 +0000 UTC m=+131.247318509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.108683 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.128160 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dwxjt" Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.199790 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.200988 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.700967454 +0000 UTC m=+131.349875762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.305737 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.306176 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.806162961 +0000 UTC m=+131.455071269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.372275 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.421969 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.422396 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:17.922370673 +0000 UTC m=+131.571278981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.484140 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:53:17 crc kubenswrapper[5004]: W1208 18:53:17.500706 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0196edda_a1e0_4e11_b84d_15988bdf3507.slice/crio-2d51286e00543c1cadbeec01393fb7a84a118f47ad65372f0509a6de51b8d665 WatchSource:0}: Error finding container 2d51286e00543c1cadbeec01393fb7a84a118f47ad65372f0509a6de51b8d665: Status 404 returned error can't find the container with id 2d51286e00543c1cadbeec01393fb7a84a118f47ad65372f0509a6de51b8d665 Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.526348 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.526894 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.026855358 +0000 UTC m=+131.675763666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.631821 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.632113 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.132096417 +0000 UTC m=+131.781004725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.666710 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerStarted","Data":"61516a3fc0ea5c9b0195a2194672d6ec8a8bf59f9441548cbd5ed7396f5a6381"} Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.684236 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-chbws" Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.691141 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerStarted","Data":"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f"} Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.700450 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerStarted","Data":"900221d5f1f170df7193461c9ff385ce1f247bf81db19ae67c273d163052d0c9"} Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.703716 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerStarted","Data":"2d51286e00543c1cadbeec01393fb7a84a118f47ad65372f0509a6de51b8d665"} Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.715668 5004 generic.go:358] "Generic (PLEG): container finished" podID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerID="4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548" exitCode=0 Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.717220 5004 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerDied","Data":"4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548"} Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.738153 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.740082 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.240053554 +0000 UTC m=+131.888961862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.838932 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.840096 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.340054825 +0000 UTC m=+131.988963143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.853716 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:53:17 crc kubenswrapper[5004]: I1208 18:53:17.943005 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:17 crc kubenswrapper[5004]: E1208 18:53:17.943991 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.443951682 +0000 UTC m=+132.092859990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.035613 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:18 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:18 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:18 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.035691 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.044760 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.045015 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.544998295 +0000 UTC m=+132.193906603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.102151 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.104851 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.145806 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.146406 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.646390011 +0000 UTC m=+132.295298309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.246662 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.247111 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.747092755 +0000 UTC m=+132.396001063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.304786 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.347964 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.348574 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.848558813 +0000 UTC m=+132.497467121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.368531 5004 ???:1] "http: TLS handshake error from 192.168.126.11:44510: no serving certificate available for the kubelet" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.451931 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.452500 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:18.95247108 +0000 UTC m=+132.601379388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.505782 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.506852 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.516946 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.517328 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.553290 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.553352 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.553390 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.553674 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.05366195 +0000 UTC m=+132.702570258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.654888 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.655134 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.655201 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.655306 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.655382 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.155365195 +0000 UTC m=+132.804273503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.758962 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.759776 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.259761848 +0000 UTC m=+132.908670156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.802619 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.828255 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.863207 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.863509 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.363490048 +0000 UTC m=+133.012398356 (durationBeforeRetry 500ms). 
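In parallel with the failing registry volume, revision-pruner-11-crc follows the normal startup path: its host-path kubelet-dir and projected kube-api-access volumes mount immediately, and "No sandbox for pod can be found. Need to start a new one" only indicates that the kubelet is asking the runtime to create a fresh pod sandbox for it. Sandbox and container state can be checked directly against CRI-O on the node; a sketch, assuming crictl is configured for the node's CRI-O socket:

    # Pod sandboxes the runtime knows for this pod name.
    sudo crictl pods --name revision-pruner-11-crc

    # All containers in that sandbox; <sandbox-id> is a placeholder taken from the previous output.
    sudo crictl ps -a --pod <sandbox-id>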
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.886487 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.895552 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" event={"ID":"5d3eaa17-c643-4536-88a0-a76854e545ab","Type":"ContainerStarted","Data":"a2f6dfd871c44af7d4d2ba2677d2c008f16f3ef5933e70ede3793c30b89cd2b3"} Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.897117 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.906324 5004 generic.go:358] "Generic (PLEG): container finished" podID="a334e99e-c733-444f-909c-978afa75eea2" containerID="31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571" exitCode=0 Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.906450 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerDied","Data":"31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571"} Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.945681 5004 generic.go:358] "Generic (PLEG): container finished" podID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerID="43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f" exitCode=0 Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.945821 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerDied","Data":"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f"} Dec 08 18:53:18 crc kubenswrapper[5004]: I1208 18:53:18.967121 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:18 crc kubenswrapper[5004]: E1208 18:53:18.968593 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.468577873 +0000 UTC m=+133.117486181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.097533 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerStarted","Data":"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9"} Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.101777 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.102122 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.60209223 +0000 UTC m=+133.251000538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.115100 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:19 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:19 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:19 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.115194 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.331019 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.331681 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
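The router-default startup probe failures come from the kubelet prober, and the start-of-body excerpt shows which sub-checks of the router's health endpoint are failing ([-]backend-http, [-]has-synced) while the process itself is fine ([+]process-running). Early in startup, before the router has synced its configuration, these 500s are expected; if they persist, the configured probes and the router's own logs are the first places to look. A sketch, using the pod name from the log:

    # Probe definitions, container state, and recent events for the router pod.
    oc -n openshift-ingress describe pod router-default-68cf44c8b8-h7zw2

    # Router container logs for the same pod.
    oc -n openshift-ingress logs router-default-68cf44c8b8-h7zw2 -c router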
No retries permitted until 2025-12-08 18:53:19.831665682 +0000 UTC m=+133.480574000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.433383 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.433814 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:19.933790571 +0000 UTC m=+133.582698879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.441679 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerStarted","Data":"1601813691b3100712ca88ede80428f4a147c3b0da5fdbab1268acd9c7fbd6bf"} Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.444883 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerStarted","Data":"dcfdd7c9fc694a94d908cfe78c726e40d3197927b79e1c1636c370e69010bf26"} Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.470871 5004 generic.go:358] "Generic (PLEG): container finished" podID="35ec334c-b741-473a-93e8-a588e1102c6a" containerID="8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502" exitCode=0 Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.470977 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerDied","Data":"8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502"} Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.497192 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.497825 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"3ae21546-cf69-455d-bfd3-c25a9217e240","Type":"ContainerStarted","Data":"30bc600f7290c34b9d8e4efc824d1edcb93ba4852f2b8f6ec60429fb65666e7c"} Dec 08 
18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.536178 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.537953 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.037936735 +0000 UTC m=+133.686845043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.639238 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.640206 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.140188669 +0000 UTC m=+133.789096977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.838091 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.838469 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.338453686 +0000 UTC m=+133.987361994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.939277 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.939563 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.439530511 +0000 UTC m=+134.088438819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:19 crc kubenswrapper[5004]: I1208 18:53:19.940346 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:19 crc kubenswrapper[5004]: E1208 18:53:19.940978 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.440957107 +0000 UTC m=+134.089865415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.037578 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:20 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:20 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:20 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.037712 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.042522 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.042969 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.542929552 +0000 UTC m=+134.191837870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.144966 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.145438 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.645416852 +0000 UTC m=+134.294325160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.221294 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.246425 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.246860 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.746812508 +0000 UTC m=+134.395720826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.349813 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.352099 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:20.852064247 +0000 UTC m=+134.500972545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.502050 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.502437 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.002414076 +0000 UTC m=+134.651322384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.537419 5004 generic.go:358] "Generic (PLEG): container finished" podID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerID="a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9" exitCode=0 Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.537528 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerDied","Data":"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9"} Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.555031 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.555113 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.557674 5004 generic.go:358] "Generic (PLEG): container finished" podID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerID="723863c18e134532db94e3334c1c79368c0a190350b349290c79a311890dc2e8" exitCode=0 Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.557795 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerDied","Data":"723863c18e134532db94e3334c1c79368c0a190350b349290c79a311890dc2e8"} Dec 08 
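The downloads-747b44746d-bxkfp readiness failure is a different pattern again: "connection refused" on 10.217.0.9:8080 means the probe reached the pod IP but nothing was listening yet, i.e. the download-server process had not started serving, rather than answering unhealthily. Container state and restart count make that distinction visible; a sketch using the pod name from the log:

    # State (Waiting/Running/Terminated) and restart count for each container in the pod.
    oc -n openshift-console get pod downloads-747b44746d-bxkfp \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{" restarts="}{.restartCount}{"\n"}{end}'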
18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.566959 5004 generic.go:358] "Generic (PLEG): container finished" podID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerID="5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9" exitCode=0 Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.567250 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerDied","Data":"5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9"} Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.570714 5004 generic.go:358] "Generic (PLEG): container finished" podID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerID="ed5bae79999b728e5a0375c22a0e30fbc17318f3a89906afc44c18a5b31f208c" exitCode=0 Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.570805 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerDied","Data":"ed5bae79999b728e5a0375c22a0e30fbc17318f3a89906afc44c18a5b31f208c"} Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.603353 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.603668 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.103654096 +0000 UTC m=+134.752562404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.636039 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"3ae21546-cf69-455d-bfd3-c25a9217e240","Type":"ContainerStarted","Data":"1d78146a25fb31286ee7deebc767435f5f597d773963e35d98ac58930ed8c280"} Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.668629 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d","Type":"ContainerStarted","Data":"f878b6bbe41f141528ab99e18a3a947d86239adccc38733cc194867d9cadc696"} Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.704605 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.705535 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.205519307 +0000 UTC m=+134.854427605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.741734 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.752458 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-nx2nz" Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.806390 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.810864 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.310821348 +0000 UTC m=+134.959729656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.829760 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=6.829736457 podStartE2EDuration="6.829736457s" podCreationTimestamp="2025-12-08 18:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:20.684152572 +0000 UTC m=+134.333060910" watchObservedRunningTime="2025-12-08 18:53:20.829736457 +0000 UTC m=+134.478644785" Dec 08 18:53:20 crc kubenswrapper[5004]: I1208 18:53:20.918890 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:20 crc kubenswrapper[5004]: E1208 18:53:20.919267 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.419244911 +0000 UTC m=+135.068153219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.021289 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.021879 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.521856775 +0000 UTC m=+135.170765083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.029904 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:21 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:21 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:21 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.030019 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.123341 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.123395 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.623369105 +0000 UTC m=+135.272277413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.123881 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.124465 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.624440829 +0000 UTC m=+135.273349137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.225280 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.225442 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.725410811 +0000 UTC m=+135.374319119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.225820 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.226688 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.726667542 +0000 UTC m=+135.375575850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.327785 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.328571 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:21.828506152 +0000 UTC m=+135.477414490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.529272 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.530701 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.030681004 +0000 UTC m=+135.679589312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.697033 5004 generic.go:358] "Generic (PLEG): container finished" podID="3ae21546-cf69-455d-bfd3-c25a9217e240" containerID="1d78146a25fb31286ee7deebc767435f5f597d773963e35d98ac58930ed8c280" exitCode=0 Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.697260 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"3ae21546-cf69-455d-bfd3-c25a9217e240","Type":"ContainerDied","Data":"1d78146a25fb31286ee7deebc767435f5f597d773963e35d98ac58930ed8c280"} Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.713707 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.714173 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.214156966 +0000 UTC m=+135.863065274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.728525 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d","Type":"ContainerStarted","Data":"1997b87c129b67596d008d71ae6ebcbd190178c88694516572b6812c8fff116d"} Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.748748 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wqg6t" Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.823315 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.823998 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:22.323977472 +0000 UTC m=+135.972885780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:21 crc kubenswrapper[5004]: I1208 18:53:21.924865 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:21 crc kubenswrapper[5004]: E1208 18:53:21.926243 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.426199834 +0000 UTC m=+136.075108142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.027243 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.027638 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.527625511 +0000 UTC m=+136.176533809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.030181 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:22 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:22 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:22 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.030300 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.130145 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.130973 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.630934749 +0000 UTC m=+136.279843157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.131617 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.131996 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.631986193 +0000 UTC m=+136.280894501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.235208 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.235407 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.735374253 +0000 UTC m=+136.384282561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.235502 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.236030 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.736019703 +0000 UTC m=+136.384928011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.358345 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.358782 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.858764235 +0000 UTC m=+136.507672543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.460811 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.461189 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:22.961173263 +0000 UTC m=+136.610081571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.561864 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.562102 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.062051632 +0000 UTC m=+136.710959940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.562333 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.562773 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.062759725 +0000 UTC m=+136.711668033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.664411 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.664705 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.164677167 +0000 UTC m=+136.813585475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.665269 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.665533 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.165525305 +0000 UTC m=+136.814433613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.804814 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:22 crc kubenswrapper[5004]: E1208 18:53:22.805099 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.305011324 +0000 UTC m=+136.953919642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.862362 5004 generic.go:358] "Generic (PLEG): container finished" podID="fdbbc49a-37c4-45b0-8130-07bc71523d83" containerID="d4817fae5f449b4d6832eb666a95df30ac1f94c04b30b8b749b369925de36534" exitCode=0 Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.862477 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" event={"ID":"fdbbc49a-37c4-45b0-8130-07bc71523d83","Type":"ContainerDied","Data":"d4817fae5f449b4d6832eb666a95df30ac1f94c04b30b8b749b369925de36534"} Dec 08 18:53:22 crc kubenswrapper[5004]: I1208 18:53:22.916740 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.9167108200000005 podStartE2EDuration="4.91671082s" podCreationTimestamp="2025-12-08 18:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:22.914922073 +0000 UTC m=+136.563830401" watchObservedRunningTime="2025-12-08 18:53:22.91671082 +0000 UTC m=+136.565619118" Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.099502 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.103165 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:23 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:23 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:23 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.103232 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.103240 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.603162038 +0000 UTC m=+137.252070496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.209276 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.210611 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.710550956 +0000 UTC m=+137.359459264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.311645 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.312351 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.812333155 +0000 UTC m=+137.461241463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.413336 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.413545 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.913512644 +0000 UTC m=+137.562420952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.414238 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.414629 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:23.914620689 +0000 UTC m=+137.563528997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.518098 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.518261 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.018232756 +0000 UTC m=+137.667141064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.518921 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.519353 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.019316171 +0000 UTC m=+137.668224479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.622314 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.622930 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.122884457 +0000 UTC m=+137.771792765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.724442 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.724835 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.2248217 +0000 UTC m=+137.873730008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.809693 5004 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.825893 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.826207 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.326153803 +0000 UTC m=+137.975062121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.826869 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.827440 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.327406374 +0000 UTC m=+137.976314822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.906547 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"3ae21546-cf69-455d-bfd3-c25a9217e240","Type":"ContainerDied","Data":"30bc600f7290c34b9d8e4efc824d1edcb93ba4852f2b8f6ec60429fb65666e7c"} Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.906624 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30bc600f7290c34b9d8e4efc824d1edcb93ba4852f2b8f6ec60429fb65666e7c" Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.907160 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.914785 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" event={"ID":"721f448b-095b-4d7f-a367-512851e5c6d6","Type":"ContainerStarted","Data":"d89c471bbd248b4f6e27418217ab511d71594760ed2e206ed74d38b381bf9a55"} Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.922411 5004 generic.go:358] "Generic (PLEG): container finished" podID="4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" containerID="1997b87c129b67596d008d71ae6ebcbd190178c88694516572b6812c8fff116d" exitCode=0 Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.922816 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d","Type":"ContainerDied","Data":"1997b87c129b67596d008d71ae6ebcbd190178c88694516572b6812c8fff116d"} Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.928688 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access\") pod \"3ae21546-cf69-455d-bfd3-c25a9217e240\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.928896 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.929015 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir\") pod \"3ae21546-cf69-455d-bfd3-c25a9217e240\" (UID: \"3ae21546-cf69-455d-bfd3-c25a9217e240\") " Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.929346 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ae21546-cf69-455d-bfd3-c25a9217e240" (UID: "3ae21546-cf69-455d-bfd3-c25a9217e240"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:53:23 crc kubenswrapper[5004]: E1208 18:53:23.929374 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.429346358 +0000 UTC m=+138.078254676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:23 crc kubenswrapper[5004]: I1208 18:53:23.945664 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ae21546-cf69-455d-bfd3-c25a9217e240" (UID: "3ae21546-cf69-455d-bfd3-c25a9217e240"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.084970 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.085524 5004 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae21546-cf69-455d-bfd3-c25a9217e240-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.085544 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae21546-cf69-455d-bfd3-c25a9217e240-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.085876 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.585861734 +0000 UTC m=+138.234770042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.091292 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:24 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:24 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:24 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.091355 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.186382 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.186976 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.68695402 +0000 UTC m=+138.335862328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.288111 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.288720 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:24.788705986 +0000 UTC m=+138.437614294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-pxbdc" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.362623 5004 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T18:53:23.809729536Z","UUID":"6dca4449-5739-46ab-a97a-9c2630fa1489","Handler":null,"Name":"","Endpoint":""} Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.372356 5004 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.372417 5004 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.390185 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.391057 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.395108 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.402275 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.406213 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:24 crc kubenswrapper[5004]: E1208 18:53:24.406370 5004 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.492102 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.500859 5004 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.500903 5004 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.620950 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-pxbdc\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.715502 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.737927 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.802363 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume\") pod \"fdbbc49a-37c4-45b0-8130-07bc71523d83\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.802602 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume\") pod \"fdbbc49a-37c4-45b0-8130-07bc71523d83\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.804693 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume" (OuterVolumeSpecName: "config-volume") pod "fdbbc49a-37c4-45b0-8130-07bc71523d83" (UID: "fdbbc49a-37c4-45b0-8130-07bc71523d83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.805431 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x895n\" (UniqueName: \"kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n\") pod \"fdbbc49a-37c4-45b0-8130-07bc71523d83\" (UID: \"fdbbc49a-37c4-45b0-8130-07bc71523d83\") " Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.806139 5004 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbbc49a-37c4-45b0-8130-07bc71523d83-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.834295 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n" (OuterVolumeSpecName: "kube-api-access-x895n") pod "fdbbc49a-37c4-45b0-8130-07bc71523d83" (UID: "fdbbc49a-37c4-45b0-8130-07bc71523d83"). InnerVolumeSpecName "kube-api-access-x895n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.837632 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fdbbc49a-37c4-45b0-8130-07bc71523d83" (UID: "fdbbc49a-37c4-45b0-8130-07bc71523d83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.898429 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.906189 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.907206 5004 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdbbc49a-37c4-45b0-8130-07bc71523d83-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:24 crc kubenswrapper[5004]: I1208 18:53:24.907268 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x895n\" (UniqueName: \"kubernetes.io/projected/fdbbc49a-37c4-45b0-8130-07bc71523d83-kube-api-access-x895n\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.022232 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" event={"ID":"721f448b-095b-4d7f-a367-512851e5c6d6","Type":"ContainerStarted","Data":"1a65d9af3b9001fabd12dc2e2b0eee34fd7633bae053dc1af550904ae02f48a5"} Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.028317 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.040493 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-tglp4" event={"ID":"fdbbc49a-37c4-45b0-8130-07bc71523d83","Type":"ContainerDied","Data":"2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62"} Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.040539 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4f6309d064f981cb54d170a1348c8d627b7455cecb8fcccf69388a67f0bd62" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.040668 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.059879 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:25 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:25 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:25 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.059977 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.624761 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.625252 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.625303 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.625837 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"310db58d9b79248fa3df1ac237a3c152f4d1126585f3ced2f62a67403543e248"} pod="openshift-console/downloads-747b44746d-bxkfp" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.625868 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" containerID="cri-o://310db58d9b79248fa3df1ac237a3c152f4d1126585f3ced2f62a67403543e248" gracePeriod=2 Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.629032 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.629121 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.899715 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.919152 5004 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:25 crc kubenswrapper[5004]: W1208 18:53:25.929540 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a6206e_5e26_43f6_aeeb_48d0c3e30780.slice/crio-c4e491a195c721e3b4f06d37f89306b7e4b8c991eda62683a8c8d9f87174afe9 WatchSource:0}: Error finding container c4e491a195c721e3b4f06d37f89306b7e4b8c991eda62683a8c8d9f87174afe9: Status 404 returned error can't find the container with id c4e491a195c721e3b4f06d37f89306b7e4b8c991eda62683a8c8d9f87174afe9 Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.933628 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir\") pod \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.933943 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access\") pod \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\" (UID: \"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d\") " Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.934818 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" (UID: "4c69bd61-9a6f-4df9-9182-7e4c2ee0645d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:53:25 crc kubenswrapper[5004]: I1208 18:53:25.983443 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" (UID: "4c69bd61-9a6f-4df9-9182-7e4c2ee0645d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.028791 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:26 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:26 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:26 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.028876 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.035476 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.035532 5004 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c69bd61-9a6f-4df9-9182-7e4c2ee0645d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.050292 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" event={"ID":"f9a6206e-5e26-43f6-aeeb-48d0c3e30780","Type":"ContainerStarted","Data":"c4e491a195c721e3b4f06d37f89306b7e4b8c991eda62683a8c8d9f87174afe9"} Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.060904 5004 generic.go:358] "Generic (PLEG): container finished" podID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerID="310db58d9b79248fa3df1ac237a3c152f4d1126585f3ced2f62a67403543e248" exitCode=0 Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.061025 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bxkfp" event={"ID":"5ef4eb78-30f8-4a10-b956-a3ba6e587d53","Type":"ContainerDied","Data":"310db58d9b79248fa3df1ac237a3c152f4d1126585f3ced2f62a67403543e248"} Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.074986 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" event={"ID":"721f448b-095b-4d7f-a367-512851e5c6d6","Type":"ContainerStarted","Data":"5bcc698bcafec541f7b7d803720418d04149d9894d511e419f93bcc1461019e8"} Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.080863 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4c69bd61-9a6f-4df9-9182-7e4c2ee0645d","Type":"ContainerDied","Data":"f878b6bbe41f141528ab99e18a3a947d86239adccc38733cc194867d9cadc696"} Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.080929 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f878b6bbe41f141528ab99e18a3a947d86239adccc38733cc194867d9cadc696" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.081685 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.361172 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tk26l" podStartSLOduration=32.361147644 podStartE2EDuration="32.361147644s" podCreationTimestamp="2025-12-08 18:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:26.356214386 +0000 UTC m=+140.005122714" watchObservedRunningTime="2025-12-08 18:53:26.361147644 +0000 UTC m=+140.010055962" Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.761055 5004 patch_prober.go:28] interesting pod/console-64d44f6ddf-t7lx4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 08 18:53:26 crc kubenswrapper[5004]: I1208 18:53:26.761296 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-t7lx4" podUID="b2c5e9e8-9b38-40fe-89fa-34d128ee718c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.029221 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:27 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:27 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:27 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.029307 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.144605 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" event={"ID":"f9a6206e-5e26-43f6-aeeb-48d0c3e30780","Type":"ContainerStarted","Data":"a7d8a8700520e896082fbafec5004aa917b9fc875cbdf664e7727b6a4bbed09e"} Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.146880 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.234094 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bxkfp" event={"ID":"5ef4eb78-30f8-4a10-b956-a3ba6e587d53","Type":"ContainerStarted","Data":"1f907b4ebb7171a4b60b176fae5bae337b8150a95ca9722108226a811b2dff79"} Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.236011 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.236263 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= 
Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.236352 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:27 crc kubenswrapper[5004]: I1208 18:53:27.304161 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" podStartSLOduration=120.304131944 podStartE2EDuration="2m0.304131944s" podCreationTimestamp="2025-12-08 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:27.243509288 +0000 UTC m=+140.892417626" watchObservedRunningTime="2025-12-08 18:53:27.304131944 +0000 UTC m=+140.953040252" Dec 08 18:53:28 crc kubenswrapper[5004]: I1208 18:53:28.030485 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:28 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:28 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:28 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:28 crc kubenswrapper[5004]: I1208 18:53:28.030655 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:28 crc kubenswrapper[5004]: I1208 18:53:28.220141 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:28 crc kubenswrapper[5004]: I1208 18:53:28.220205 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:28 crc kubenswrapper[5004]: I1208 18:53:28.661248 5004 ???:1] "http: TLS handshake error from 192.168.126.11:37294: no serving certificate available for the kubelet" Dec 08 18:53:29 crc kubenswrapper[5004]: I1208 18:53:29.030318 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:29 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:29 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:29 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:29 crc kubenswrapper[5004]: I1208 18:53:29.030398 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:30 crc kubenswrapper[5004]: I1208 18:53:30.037850 
5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:30 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:30 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:30 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:30 crc kubenswrapper[5004]: I1208 18:53:30.038855 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:31 crc kubenswrapper[5004]: I1208 18:53:31.038329 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:31 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:31 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:31 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:31 crc kubenswrapper[5004]: I1208 18:53:31.038394 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:32 crc kubenswrapper[5004]: I1208 18:53:32.029575 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:32 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:32 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:32 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:32 crc kubenswrapper[5004]: I1208 18:53:32.029671 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:33 crc kubenswrapper[5004]: I1208 18:53:33.029323 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:33 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:33 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:33 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:33 crc kubenswrapper[5004]: I1208 18:53:33.029864 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:34 crc kubenswrapper[5004]: I1208 18:53:34.028338 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Dec 08 18:53:34 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:34 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:34 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:34 crc kubenswrapper[5004]: I1208 18:53:34.028469 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:34 crc kubenswrapper[5004]: E1208 18:53:34.396524 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:34 crc kubenswrapper[5004]: E1208 18:53:34.398461 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:34 crc kubenswrapper[5004]: E1208 18:53:34.400217 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:34 crc kubenswrapper[5004]: E1208 18:53:34.400265 5004 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:35 crc kubenswrapper[5004]: I1208 18:53:35.028195 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:35 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:35 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:35 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:35 crc kubenswrapper[5004]: I1208 18:53:35.028255 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:35 crc kubenswrapper[5004]: I1208 18:53:35.573406 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:35 crc kubenswrapper[5004]: I1208 18:53:35.573525 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.045206 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:36 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:36 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:36 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.045592 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.768715 5004 patch_prober.go:28] interesting pod/console-64d44f6ddf-t7lx4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.768813 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-t7lx4" podUID="b2c5e9e8-9b38-40fe-89fa-34d128ee718c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.788693 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.789113 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" containerID="cri-o://aaff36e0e11f2f014fd8a27464cb291bacd06401428bbf342241c2888e62b219" gracePeriod=30 Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.827046 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:53:36 crc kubenswrapper[5004]: I1208 18:53:36.827337 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" containerID="cri-o://9ee61d60f8e78cd88f1b9b9e8d05468321bcec5e3ba40bb70ec025a083738eec" gracePeriod=30 Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.029242 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:37 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:37 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:37 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.029351 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.437527 5004 generic.go:358] "Generic (PLEG): container finished" podID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerID="aaff36e0e11f2f014fd8a27464cb291bacd06401428bbf342241c2888e62b219" exitCode=0 Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.437631 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" event={"ID":"6455354b-74ef-4e73-9a43-c7fad7edcf61","Type":"ContainerDied","Data":"aaff36e0e11f2f014fd8a27464cb291bacd06401428bbf342241c2888e62b219"} Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.619570 5004 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mf2f2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 08 18:53:37 crc kubenswrapper[5004]: I1208 18:53:37.619976 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.030458 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:38 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:38 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:38 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.030601 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.220486 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.221295 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.443297 5004 generic.go:358] "Generic (PLEG): container finished" podID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerID="9ee61d60f8e78cd88f1b9b9e8d05468321bcec5e3ba40bb70ec025a083738eec" exitCode=0 Dec 08 18:53:38 crc kubenswrapper[5004]: I1208 18:53:38.443383 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" 
event={"ID":"bdf0d9fe-459a-442c-b551-ba165104b4fd","Type":"ContainerDied","Data":"9ee61d60f8e78cd88f1b9b9e8d05468321bcec5e3ba40bb70ec025a083738eec"} Dec 08 18:53:39 crc kubenswrapper[5004]: I1208 18:53:39.028478 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:39 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:39 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:39 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:39 crc kubenswrapper[5004]: I1208 18:53:39.028556 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.030989 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6p5ww" Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.031178 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:40 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:40 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:40 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.031293 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.456932 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkxw8_a1aa164d-cf7a-4c71-90db-3488e29d60a2/kube-multus-additional-cni-plugins/0.log" Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.456995 5004 generic.go:358] "Generic (PLEG): container finished" podID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" exitCode=137 Dec 08 18:53:40 crc kubenswrapper[5004]: I1208 18:53:40.457179 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" event={"ID":"a1aa164d-cf7a-4c71-90db-3488e29d60a2","Type":"ContainerDied","Data":"07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0"} Dec 08 18:53:41 crc kubenswrapper[5004]: I1208 18:53:41.031317 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:41 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:41 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:41 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:41 crc kubenswrapper[5004]: I1208 18:53:41.031408 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" 
podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:41 crc kubenswrapper[5004]: I1208 18:53:41.816176 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:42 crc kubenswrapper[5004]: I1208 18:53:42.028030 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:42 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:42 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:42 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:42 crc kubenswrapper[5004]: I1208 18:53:42.028138 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:43 crc kubenswrapper[5004]: I1208 18:53:43.029373 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:43 crc kubenswrapper[5004]: [-]has-synced failed: reason withheld Dec 08 18:53:43 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:43 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:43 crc kubenswrapper[5004]: I1208 18:53:43.029466 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:44 crc kubenswrapper[5004]: I1208 18:53:44.028467 5004 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-h7zw2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:44 crc kubenswrapper[5004]: [+]has-synced ok Dec 08 18:53:44 crc kubenswrapper[5004]: [+]process-running ok Dec 08 18:53:44 crc kubenswrapper[5004]: healthz check failed Dec 08 18:53:44 crc kubenswrapper[5004]: I1208 18:53:44.028566 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" podUID="295410e0-8c26-494c-89b5-fee76ecf0ff4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:44 crc kubenswrapper[5004]: E1208 18:53:44.378865 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:44 crc kubenswrapper[5004]: E1208 18:53:44.379097 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is 
running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:44 crc kubenswrapper[5004]: E1208 18:53:44.379238 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:44 crc kubenswrapper[5004]: E1208 18:53:44.379269 5004 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:44 crc kubenswrapper[5004]: I1208 18:53:44.570944 5004 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-bmpp4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 08 18:53:44 crc kubenswrapper[5004]: I1208 18:53:44.571036 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 08 18:53:45 crc kubenswrapper[5004]: I1208 18:53:45.560195 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:53:45 crc kubenswrapper[5004]: I1208 18:53:45.565603 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-h7zw2" Dec 08 18:53:45 crc kubenswrapper[5004]: I1208 18:53:45.571544 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:45 crc kubenswrapper[5004]: I1208 18:53:45.571954 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:46 crc kubenswrapper[5004]: I1208 18:53:46.873969 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:53:46 crc kubenswrapper[5004]: I1208 18:53:46.880899 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-t7lx4" Dec 08 18:53:47 crc kubenswrapper[5004]: I1208 18:53:47.615478 5004 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mf2f2 container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 08 18:53:47 crc kubenswrapper[5004]: I1208 18:53:47.615621 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[5004]: I1208 18:53:48.221334 5004 patch_prober.go:28] interesting pod/downloads-747b44746d-bxkfp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[5004]: I1208 18:53:48.221441 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bxkfp" podUID="5ef4eb78-30f8-4a10-b956-a3ba6e587d53" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.246468 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.268893 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.269915 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.269950 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.269988 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdbbc49a-37c4-45b0-8130-07bc71523d83" containerName="collect-profiles" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.269997 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbbc49a-37c4-45b0-8130-07bc71523d83" containerName="collect-profiles" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.270020 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ae21546-cf69-455d-bfd3-c25a9217e240" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.270027 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae21546-cf69-455d-bfd3-c25a9217e240" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.276273 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ae21546-cf69-455d-bfd3-c25a9217e240" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.276342 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="fdbbc49a-37c4-45b0-8130-07bc71523d83" containerName="collect-profiles" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.276370 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c69bd61-9a6f-4df9-9182-7e4c2ee0645d" containerName="pruner" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.282946 5004 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.298098 5004 ???:1] "http: TLS handshake error from 192.168.126.11:60730: no serving certificate available for the kubelet" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.298825 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.299596 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.323644 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.445047 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.445111 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.622476 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.622534 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.623147 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.658833 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:49 crc kubenswrapper[5004]: I1208 18:53:49.933147 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: E1208 18:53:54.403685 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:54 crc kubenswrapper[5004]: E1208 18:53:54.411746 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:54 crc kubenswrapper[5004]: E1208 18:53:54.412374 5004 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:54 crc kubenswrapper[5004]: E1208 18:53:54.412411 5004 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.610769 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.640650 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.640886 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.717561 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.717732 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.717810 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.819044 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.819171 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.819237 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.819376 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.819564 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.845322 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access\") pod \"installer-12-crc\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:54 crc kubenswrapper[5004]: I1208 18:53:54.985712 5004 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:53:55 crc kubenswrapper[5004]: I1208 18:53:55.571179 5004 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-bmpp4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:55 crc kubenswrapper[5004]: I1208 18:53:55.571288 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:58 crc kubenswrapper[5004]: I1208 18:53:58.400510 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-bxkfp" Dec 08 18:53:58 crc kubenswrapper[5004]: I1208 18:53:58.626859 5004 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mf2f2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:58 crc kubenswrapper[5004]: I1208 18:53:58.626975 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.350833 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.371034 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.388197 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.388973 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.388995 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.389010 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.389017 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.389399 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" containerName="controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.389427 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" containerName="route-controller-manager" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.404106 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.408871 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409572 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fldkd\" (UniqueName: \"kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409679 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409732 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert\") pod \"bdf0d9fe-459a-442c-b551-ba165104b4fd\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409851 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409928 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca\") pod \"bdf0d9fe-459a-442c-b551-ba165104b4fd\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.409985 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.410310 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.410397 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config\") pod \"6455354b-74ef-4e73-9a43-c7fad7edcf61\" (UID: \"6455354b-74ef-4e73-9a43-c7fad7edcf61\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.410478 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg7p\" (UniqueName: \"kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p\") pod \"bdf0d9fe-459a-442c-b551-ba165104b4fd\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.412643 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.412909 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca" (OuterVolumeSpecName: "client-ca") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.413197 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp" (OuterVolumeSpecName: "tmp") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.413609 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdf0d9fe-459a-442c-b551-ba165104b4fd" (UID: "bdf0d9fe-459a-442c-b551-ba165104b4fd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.414262 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config" (OuterVolumeSpecName: "config") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.418064 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config\") pod \"bdf0d9fe-459a-442c-b551-ba165104b4fd\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.418227 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp\") pod \"bdf0d9fe-459a-442c-b551-ba165104b4fd\" (UID: \"bdf0d9fe-459a-442c-b551-ba165104b4fd\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.418969 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.418984 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.419006 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.419022 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6455354b-74ef-4e73-9a43-c7fad7edcf61-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.419030 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6455354b-74ef-4e73-9a43-c7fad7edcf61-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.420545 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config" (OuterVolumeSpecName: "config") pod "bdf0d9fe-459a-442c-b551-ba165104b4fd" (UID: "bdf0d9fe-459a-442c-b551-ba165104b4fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.433017 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp" (OuterVolumeSpecName: "tmp") pod "bdf0d9fe-459a-442c-b551-ba165104b4fd" (UID: "bdf0d9fe-459a-442c-b551-ba165104b4fd"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.434223 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdf0d9fe-459a-442c-b551-ba165104b4fd" (UID: "bdf0d9fe-459a-442c-b551-ba165104b4fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.437627 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.442517 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd" (OuterVolumeSpecName: "kube-api-access-fldkd") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "kube-api-access-fldkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.443211 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6455354b-74ef-4e73-9a43-c7fad7edcf61" (UID: "6455354b-74ef-4e73-9a43-c7fad7edcf61"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.449638 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.456674 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p" (OuterVolumeSpecName: "kube-api-access-kwg7p") pod "bdf0d9fe-459a-442c-b551-ba165104b4fd" (UID: "bdf0d9fe-459a-442c-b551-ba165104b4fd"). InnerVolumeSpecName "kube-api-access-kwg7p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.473379 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.520862 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.520927 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.520956 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.520975 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdpf\" (UniqueName: \"kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.520998 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521047 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521135 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521161 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzfm\" (UniqueName: 
\"kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521204 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521227 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521252 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521296 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwg7p\" (UniqueName: \"kubernetes.io/projected/bdf0d9fe-459a-442c-b551-ba165104b4fd-kube-api-access-kwg7p\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521307 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf0d9fe-459a-442c-b551-ba165104b4fd-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521317 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d9fe-459a-442c-b551-ba165104b4fd-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521326 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fldkd\" (UniqueName: \"kubernetes.io/projected/6455354b-74ef-4e73-9a43-c7fad7edcf61-kube-api-access-fldkd\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521334 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf0d9fe-459a-442c-b551-ba165104b4fd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.521344 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6455354b-74ef-4e73-9a43-c7fad7edcf61-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.534754 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkxw8_a1aa164d-cf7a-4c71-90db-3488e29d60a2/kube-multus-additional-cni-plugins/0.log" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.534866 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.622082 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir\") pod \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.622237 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready\") pod \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.622290 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist\") pod \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.622318 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpkch\" (UniqueName: \"kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch\") pod \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\" (UID: \"a1aa164d-cf7a-4c71-90db-3488e29d60a2\") " Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.623184 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "a1aa164d-cf7a-4c71-90db-3488e29d60a2" (UID: "a1aa164d-cf7a-4c71-90db-3488e29d60a2"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.624417 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "a1aa164d-cf7a-4c71-90db-3488e29d60a2" (UID: "a1aa164d-cf7a-4c71-90db-3488e29d60a2"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.624510 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.624627 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.625307 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.625450 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.625549 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.625809 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready" (OuterVolumeSpecName: "ready") pod "a1aa164d-cf7a-4c71-90db-3488e29d60a2" (UID: "a1aa164d-cf7a-4c71-90db-3488e29d60a2"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626295 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626346 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdpf\" (UniqueName: \"kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626381 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626488 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626639 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.626678 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzfm\" (UniqueName: \"kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.628906 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.630318 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.630506 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.630582 5004 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1aa164d-cf7a-4c71-90db-3488e29d60a2-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.630675 5004 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a1aa164d-cf7a-4c71-90db-3488e29d60a2-ready\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.630695 5004 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1aa164d-cf7a-4c71-90db-3488e29d60a2-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.632053 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.633736 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.636795 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.639834 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.646727 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.647271 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " 
pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.651697 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch" (OuterVolumeSpecName: "kube-api-access-fpkch") pod "a1aa164d-cf7a-4c71-90db-3488e29d60a2" (UID: "a1aa164d-cf7a-4c71-90db-3488e29d60a2"). InnerVolumeSpecName "kube-api-access-fpkch". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.656609 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdpf\" (UniqueName: \"kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf\") pod \"controller-manager-6cbdf9cf55-bmwj8\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.656740 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzfm\" (UniqueName: \"kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm\") pod \"route-controller-manager-7474799fdc-lbf7v\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.770957 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fpkch\" (UniqueName: \"kubernetes.io/projected/a1aa164d-cf7a-4c71-90db-3488e29d60a2-kube-api-access-fpkch\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.801656 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.844411 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.985381 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" event={"ID":"bdf0d9fe-459a-442c-b551-ba165104b4fd","Type":"ContainerDied","Data":"c55b6f72cd2e6e2c5bcbb01eca3a8772c88dae7e8f7354a0ba11a0d40039b57c"} Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.986408 5004 scope.go:117] "RemoveContainer" containerID="9ee61d60f8e78cd88f1b9b9e8d05468321bcec5e3ba40bb70ec025a083738eec" Dec 08 18:54:00 crc kubenswrapper[5004]: I1208 18:54:00.986948 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:00.998476 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:00.999438 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mf2f2" event={"ID":"6455354b-74ef-4e73-9a43-c7fad7edcf61","Type":"ContainerDied","Data":"905720e7b3fd4e442907ffa113f9e2ca42d46722156c58cda6a35b14d38cac15"} Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.016616 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkxw8_a1aa164d-cf7a-4c71-90db-3488e29d60a2/kube-multus-additional-cni-plugins/0.log" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.021464 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" event={"ID":"a1aa164d-cf7a-4c71-90db-3488e29d60a2","Type":"ContainerDied","Data":"17c94a33c2a825e5ad1ffee48b6350c8c8e5aad6dc1aa4a3596bd7e4960893ab"} Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.021674 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkxw8" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.047537 5004 scope.go:117] "RemoveContainer" containerID="aaff36e0e11f2f014fd8a27464cb291bacd06401428bbf342241c2888e62b219" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.069288 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.078958 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-bmpp4"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.099470 5004 scope.go:117] "RemoveContainer" containerID="07c3106c5d246db028e5090c4c578ca5a75c3d5adeb2be3613bd502c51e4fcf0" Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.202283 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.202331 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mf2f2"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.202355 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkxw8"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.202374 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkxw8"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.229700 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.246563 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:54:01 crc kubenswrapper[5004]: W1208 18:54:01.261233 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podffcdf638_173d_4a35_9fb6_01cb9844af6a.slice/crio-9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958 WatchSource:0}: Error finding container 9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958: Status 404 returned error can't find the container with id 9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958 Dec 08 18:54:01 crc 
kubenswrapper[5004]: W1208 18:54:01.288541 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod59bd67aa_41ac_42e0_883c_ba376f5256d1.slice/crio-dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187 WatchSource:0}: Error finding container dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187: Status 404 returned error can't find the container with id dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187 Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.346103 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:01 crc kubenswrapper[5004]: I1208 18:54:01.380801 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.047442 5004 generic.go:358] "Generic (PLEG): container finished" podID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerID="b979d09f7b710d35c90943870fef962ff466efc2b234ec086b69817bfc0525e4" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.048713 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerDied","Data":"b979d09f7b710d35c90943870fef962ff466efc2b234ec086b69817bfc0525e4"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.061238 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ffcdf638-173d-4a35-9fb6-01cb9844af6a","Type":"ContainerStarted","Data":"9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.078335 5004 generic.go:358] "Generic (PLEG): container finished" podID="35ec334c-b741-473a-93e8-a588e1102c6a" containerID="9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.078479 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerDied","Data":"9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.081836 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" event={"ID":"0e8a8547-6e06-49fc-846c-47f77b4ad6c8","Type":"ContainerStarted","Data":"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.081862 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" event={"ID":"0e8a8547-6e06-49fc-846c-47f77b4ad6c8","Type":"ContainerStarted","Data":"a5dfee185d5c43f8df6b7158275979e63e1c76d500d60fc10481254217d14f9e"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.082503 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.091885 5004 patch_prober.go:28] interesting pod/route-controller-manager-7474799fdc-lbf7v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.091952 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.114589 5004 generic.go:358] "Generic (PLEG): container finished" podID="a334e99e-c733-444f-909c-978afa75eea2" containerID="b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.114873 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerDied","Data":"b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.133150 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"59bd67aa-41ac-42e0-883c-ba376f5256d1","Type":"ContainerStarted","Data":"dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.138956 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" podStartSLOduration=6.138926335 podStartE2EDuration="6.138926335s" podCreationTimestamp="2025-12-08 18:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:02.137570342 +0000 UTC m=+175.786478660" watchObservedRunningTime="2025-12-08 18:54:02.138926335 +0000 UTC m=+175.787834643" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.150082 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" event={"ID":"efadb339-01d5-42c1-ba13-15d4c2b97b2d","Type":"ContainerStarted","Data":"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.151028 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" event={"ID":"efadb339-01d5-42c1-ba13-15d4c2b97b2d","Type":"ContainerStarted","Data":"bda165f45b8d922b3c5810467d1685316ffad979f0e3c64a402e7b02fb87a094"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.154510 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.165457 5004 generic.go:358] "Generic (PLEG): container finished" podID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerID="fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.165584 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerDied","Data":"fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 
18:54:02.189149 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerStarted","Data":"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.191671 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerStarted","Data":"0f378cc54c0b4e311d437fddf4e6103425635ed11a5f7f6a821741831915e028"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.195340 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerStarted","Data":"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.206184 5004 generic.go:358] "Generic (PLEG): container finished" podID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerID="3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.206316 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerDied","Data":"3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552"} Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.228715 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" podStartSLOduration=6.228689088 podStartE2EDuration="6.228689088s" podCreationTimestamp="2025-12-08 18:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:02.224642718 +0000 UTC m=+175.873551026" watchObservedRunningTime="2025-12-08 18:54:02.228689088 +0000 UTC m=+175.877597396" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.484521 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.776534 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6455354b-74ef-4e73-9a43-c7fad7edcf61" path="/var/lib/kubelet/pods/6455354b-74ef-4e73-9a43-c7fad7edcf61/volumes" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.777903 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" path="/var/lib/kubelet/pods/a1aa164d-cf7a-4c71-90db-3488e29d60a2/volumes" Dec 08 18:54:02 crc kubenswrapper[5004]: I1208 18:54:02.779042 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf0d9fe-459a-442c-b551-ba165104b4fd" path="/var/lib/kubelet/pods/bdf0d9fe-459a-442c-b551-ba165104b4fd/volumes" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.273696 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerStarted","Data":"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.277230 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" 
event={"ID":"59bd67aa-41ac-42e0-883c-ba376f5256d1","Type":"ContainerStarted","Data":"938a205d5f0fd05d63ae557e60642501f3f939cdb6268ff960f5359e1babdbc8"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.291414 5004 generic.go:358] "Generic (PLEG): container finished" podID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerID="cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259" exitCode=0 Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.291521 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerDied","Data":"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.291603 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerStarted","Data":"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.303052 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerStarted","Data":"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.305105 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerStarted","Data":"fbc132943e0984809bc2f3c6458619d566b5e121303a51a9a146ca1b61158b66"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.307095 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ffcdf638-173d-4a35-9fb6-01cb9844af6a","Type":"ContainerStarted","Data":"b697319b66f153c47aa1815340f4c74abc5258cac6c6a680c4dad9d564fe1999"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.309832 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerStarted","Data":"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8"} Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.339541 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.433907 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rg666" podStartSLOduration=11.807603225 podStartE2EDuration="53.433888348s" podCreationTimestamp="2025-12-08 18:53:10 +0000 UTC" firstStartedPulling="2025-12-08 18:53:18.907266613 +0000 UTC m=+132.556174921" lastFinishedPulling="2025-12-08 18:54:00.533551736 +0000 UTC m=+174.182460044" observedRunningTime="2025-12-08 18:54:03.393027195 +0000 UTC m=+177.041935503" watchObservedRunningTime="2025-12-08 18:54:03.433888348 +0000 UTC m=+177.082796656" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.434359 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhs6h" podStartSLOduration=11.385520973 podStartE2EDuration="52.434350072s" podCreationTimestamp="2025-12-08 18:53:11 +0000 UTC" firstStartedPulling="2025-12-08 
18:53:19.478158486 +0000 UTC m=+133.127066804" lastFinishedPulling="2025-12-08 18:54:00.526987595 +0000 UTC m=+174.175895903" observedRunningTime="2025-12-08 18:54:03.432489663 +0000 UTC m=+177.081397991" watchObservedRunningTime="2025-12-08 18:54:03.434350072 +0000 UTC m=+177.083258390" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.572217 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fkpfb" podStartSLOduration=10.621904264 podStartE2EDuration="50.572193928s" podCreationTimestamp="2025-12-08 18:53:13 +0000 UTC" firstStartedPulling="2025-12-08 18:53:20.576860416 +0000 UTC m=+134.225768724" lastFinishedPulling="2025-12-08 18:54:00.52715008 +0000 UTC m=+174.176058388" observedRunningTime="2025-12-08 18:54:03.566744534 +0000 UTC m=+177.215652852" watchObservedRunningTime="2025-12-08 18:54:03.572193928 +0000 UTC m=+177.221102226" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.611920 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=14.611896954 podStartE2EDuration="14.611896954s" podCreationTimestamp="2025-12-08 18:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:03.608911437 +0000 UTC m=+177.257819745" watchObservedRunningTime="2025-12-08 18:54:03.611896954 +0000 UTC m=+177.260805262" Dec 08 18:54:03 crc kubenswrapper[5004]: I1208 18:54:03.644596 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=9.644579213 podStartE2EDuration="9.644579213s" podCreationTimestamp="2025-12-08 18:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:03.642537177 +0000 UTC m=+177.291445495" watchObservedRunningTime="2025-12-08 18:54:03.644579213 +0000 UTC m=+177.293487521" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.361954 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerStarted","Data":"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14"} Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.394243 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v879b" podStartSLOduration=11.482072253 podStartE2EDuration="54.394217795s" podCreationTimestamp="2025-12-08 18:53:10 +0000 UTC" firstStartedPulling="2025-12-08 18:53:17.692417454 +0000 UTC m=+131.341325762" lastFinishedPulling="2025-12-08 18:54:00.604562996 +0000 UTC m=+174.253471304" observedRunningTime="2025-12-08 18:54:04.392325614 +0000 UTC m=+178.041233922" watchObservedRunningTime="2025-12-08 18:54:04.394217795 +0000 UTC m=+178.043126103" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.510339 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.510390 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.513361 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-scjp4" podStartSLOduration=10.623355229 podStartE2EDuration="53.51334115s" podCreationTimestamp="2025-12-08 18:53:11 +0000 UTC" firstStartedPulling="2025-12-08 18:53:17.716642372 +0000 UTC m=+131.365550680" lastFinishedPulling="2025-12-08 18:54:00.606628293 +0000 UTC m=+174.255536601" observedRunningTime="2025-12-08 18:54:04.460468192 +0000 UTC m=+178.109376520" watchObservedRunningTime="2025-12-08 18:54:04.51334115 +0000 UTC m=+178.162249458" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.717132 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:04 crc kubenswrapper[5004]: I1208 18:54:04.717215 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:05 crc kubenswrapper[5004]: I1208 18:54:05.398309 5004 generic.go:358] "Generic (PLEG): container finished" podID="59bd67aa-41ac-42e0-883c-ba376f5256d1" containerID="938a205d5f0fd05d63ae557e60642501f3f939cdb6268ff960f5359e1babdbc8" exitCode=0 Dec 08 18:54:05 crc kubenswrapper[5004]: I1208 18:54:05.400512 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"59bd67aa-41ac-42e0-883c-ba376f5256d1","Type":"ContainerDied","Data":"938a205d5f0fd05d63ae557e60642501f3f939cdb6268ff960f5359e1babdbc8"} Dec 08 18:54:05 crc kubenswrapper[5004]: I1208 18:54:05.424100 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lt66j" podStartSLOduration=12.387582512 podStartE2EDuration="52.424057224s" podCreationTimestamp="2025-12-08 18:53:13 +0000 UTC" firstStartedPulling="2025-12-08 18:53:20.568023352 +0000 UTC m=+134.216931660" lastFinishedPulling="2025-12-08 18:54:00.604498064 +0000 UTC m=+174.253406372" observedRunningTime="2025-12-08 18:54:04.509532627 +0000 UTC m=+178.158440945" watchObservedRunningTime="2025-12-08 18:54:05.424057224 +0000 UTC m=+179.072965532" Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 18:54:06.226995 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lt66j" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="registry-server" probeResult="failure" output=< Dec 08 18:54:06 crc kubenswrapper[5004]: timeout: failed to connect service ":50051" within 1s Dec 08 18:54:06 crc kubenswrapper[5004]: > Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 18:54:06.237152 5004 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkpfb" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="registry-server" probeResult="failure" output=< Dec 08 18:54:06 crc kubenswrapper[5004]: timeout: failed to connect service ":50051" within 1s Dec 08 18:54:06 crc kubenswrapper[5004]: > Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 18:54:06.411106 5004 generic.go:358] "Generic (PLEG): container finished" podID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerID="32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb" exitCode=0 Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 18:54:06.411232 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerDied","Data":"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb"} Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 
18:54:06.417155 5004 generic.go:358] "Generic (PLEG): container finished" podID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerID="0f378cc54c0b4e311d437fddf4e6103425635ed11a5f7f6a821741831915e028" exitCode=0 Dec 08 18:54:06 crc kubenswrapper[5004]: I1208 18:54:06.417316 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerDied","Data":"0f378cc54c0b4e311d437fddf4e6103425635ed11a5f7f6a821741831915e028"} Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.326983 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.330526 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access\") pod \"59bd67aa-41ac-42e0-883c-ba376f5256d1\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.330750 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir\") pod \"59bd67aa-41ac-42e0-883c-ba376f5256d1\" (UID: \"59bd67aa-41ac-42e0-883c-ba376f5256d1\") " Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.330925 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "59bd67aa-41ac-42e0-883c-ba376f5256d1" (UID: "59bd67aa-41ac-42e0-883c-ba376f5256d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.331102 5004 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bd67aa-41ac-42e0-883c-ba376f5256d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.361201 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "59bd67aa-41ac-42e0-883c-ba376f5256d1" (UID: "59bd67aa-41ac-42e0-883c-ba376f5256d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.433233 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bd67aa-41ac-42e0-883c-ba376f5256d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.435563 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"59bd67aa-41ac-42e0-883c-ba376f5256d1","Type":"ContainerDied","Data":"dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187"} Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.435631 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dce788ca589adc51af5ca31325b80af3bb17a7e5ae4be950000500a3a96b9187" Dec 08 18:54:07 crc kubenswrapper[5004]: I1208 18:54:07.435778 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:08 crc kubenswrapper[5004]: I1208 18:54:08.474545 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerStarted","Data":"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7"} Dec 08 18:54:08 crc kubenswrapper[5004]: I1208 18:54:08.477461 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerStarted","Data":"ef962ce23d0dae5c5a0257d08c61c4fc1554390fdace6b14f325f7c6b7910851"} Dec 08 18:54:08 crc kubenswrapper[5004]: I1208 18:54:08.503563 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7l7m" podStartSLOduration=12.997756438 podStartE2EDuration="54.50353994s" podCreationTimestamp="2025-12-08 18:53:14 +0000 UTC" firstStartedPulling="2025-12-08 18:53:19.098615458 +0000 UTC m=+132.747523766" lastFinishedPulling="2025-12-08 18:54:00.60439896 +0000 UTC m=+174.253307268" observedRunningTime="2025-12-08 18:54:08.500547324 +0000 UTC m=+182.149455632" watchObservedRunningTime="2025-12-08 18:54:08.50353994 +0000 UTC m=+182.152448248" Dec 08 18:54:08 crc kubenswrapper[5004]: I1208 18:54:08.520572 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h9jcq" podStartSLOduration=15.389103145 podStartE2EDuration="55.520554606s" podCreationTimestamp="2025-12-08 18:53:13 +0000 UTC" firstStartedPulling="2025-12-08 18:53:20.558674613 +0000 UTC m=+134.207582921" lastFinishedPulling="2025-12-08 18:54:00.690126074 +0000 UTC m=+174.339034382" observedRunningTime="2025-12-08 18:54:08.518124818 +0000 UTC m=+182.167033146" watchObservedRunningTime="2025-12-08 18:54:08.520554606 +0000 UTC m=+182.169462924" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.125342 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.125754 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.186812 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.535286 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.661027 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.661102 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.696472 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.962360 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:11 crc 
kubenswrapper[5004]: I1208 18:54:11.962428 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.966255 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:11 crc kubenswrapper[5004]: I1208 18:54:11.966321 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:12 crc kubenswrapper[5004]: I1208 18:54:12.006955 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:12 crc kubenswrapper[5004]: I1208 18:54:12.018470 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:12 crc kubenswrapper[5004]: I1208 18:54:12.543152 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:12 crc kubenswrapper[5004]: I1208 18:54:12.552029 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:12 crc kubenswrapper[5004]: I1208 18:54:12.559660 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:54:13 crc kubenswrapper[5004]: I1208 18:54:13.107495 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:54:13 crc kubenswrapper[5004]: I1208 18:54:13.704527 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.328356 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.328756 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.380563 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.516922 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-scjp4" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="registry-server" containerID="cri-o://30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27" gracePeriod=2 Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.517384 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhs6h" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="registry-server" containerID="cri-o://c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8" gracePeriod=2 Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.556728 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.557359 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h9jcq" 
Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.599410 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.732796 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.795820 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.900348 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.900407 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:14 crc kubenswrapper[5004]: I1208 18:54:14.950890 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.021776 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.105837 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146327 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content\") pod \"35ec334c-b741-473a-93e8-a588e1102c6a\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146426 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content\") pod \"c2312b49-de56-41e9-b8cd-8786f68696b7\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146517 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq66s\" (UniqueName: \"kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s\") pod \"35ec334c-b741-473a-93e8-a588e1102c6a\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146558 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdmm2\" (UniqueName: \"kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2\") pod \"c2312b49-de56-41e9-b8cd-8786f68696b7\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146600 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities\") pod \"35ec334c-b741-473a-93e8-a588e1102c6a\" (UID: \"35ec334c-b741-473a-93e8-a588e1102c6a\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.146654 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities\") pod \"c2312b49-de56-41e9-b8cd-8786f68696b7\" (UID: \"c2312b49-de56-41e9-b8cd-8786f68696b7\") " Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.147515 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities" (OuterVolumeSpecName: "utilities") pod "c2312b49-de56-41e9-b8cd-8786f68696b7" (UID: "c2312b49-de56-41e9-b8cd-8786f68696b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.147971 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities" (OuterVolumeSpecName: "utilities") pod "35ec334c-b741-473a-93e8-a588e1102c6a" (UID: "35ec334c-b741-473a-93e8-a588e1102c6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.164337 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2" (OuterVolumeSpecName: "kube-api-access-hdmm2") pod "c2312b49-de56-41e9-b8cd-8786f68696b7" (UID: "c2312b49-de56-41e9-b8cd-8786f68696b7"). InnerVolumeSpecName "kube-api-access-hdmm2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.164457 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s" (OuterVolumeSpecName: "kube-api-access-lq66s") pod "35ec334c-b741-473a-93e8-a588e1102c6a" (UID: "35ec334c-b741-473a-93e8-a588e1102c6a"). InnerVolumeSpecName "kube-api-access-lq66s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.181189 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2312b49-de56-41e9-b8cd-8786f68696b7" (UID: "c2312b49-de56-41e9-b8cd-8786f68696b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.197446 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35ec334c-b741-473a-93e8-a588e1102c6a" (UID: "35ec334c-b741-473a-93e8-a588e1102c6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248813 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lq66s\" (UniqueName: \"kubernetes.io/projected/35ec334c-b741-473a-93e8-a588e1102c6a-kube-api-access-lq66s\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248865 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdmm2\" (UniqueName: \"kubernetes.io/projected/c2312b49-de56-41e9-b8cd-8786f68696b7-kube-api-access-hdmm2\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248879 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248897 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248908 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ec334c-b741-473a-93e8-a588e1102c6a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.248918 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2312b49-de56-41e9-b8cd-8786f68696b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.525057 5004 generic.go:358] "Generic (PLEG): container finished" podID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerID="30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27" exitCode=0 Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.525190 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scjp4" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.525179 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerDied","Data":"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27"} Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.525725 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scjp4" event={"ID":"c2312b49-de56-41e9-b8cd-8786f68696b7","Type":"ContainerDied","Data":"409846b07dc062323baa00666d71f5efb160a2883d862a53b9151f30b7c484a4"} Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.525751 5004 scope.go:117] "RemoveContainer" containerID="30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.527767 5004 generic.go:358] "Generic (PLEG): container finished" podID="35ec334c-b741-473a-93e8-a588e1102c6a" containerID="c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8" exitCode=0 Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.528424 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhs6h" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.529044 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerDied","Data":"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8"} Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.529091 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhs6h" event={"ID":"35ec334c-b741-473a-93e8-a588e1102c6a","Type":"ContainerDied","Data":"0e4e61fdaebb417dbeafca951b1e261b31c2066a516f59b556957a9214eaf07c"} Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.563678 5004 scope.go:117] "RemoveContainer" containerID="cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.576652 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.582933 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-scjp4"] Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.584225 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.586706 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.590667 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhs6h"] Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.592589 5004 scope.go:117] "RemoveContainer" containerID="4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.609115 5004 scope.go:117] "RemoveContainer" containerID="30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.609550 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27\": container with ID starting with 30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27 not found: ID does not exist" containerID="30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.609659 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27"} err="failed to get container status \"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27\": rpc error: code = NotFound desc = could not find container \"30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27\": container with ID starting with 30d6aee2f785999b5b31153c820edf720b204f083740c9e0f0e0dd92ef99bc27 not found: ID does not exist" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.609751 5004 scope.go:117] "RemoveContainer" containerID="cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.610317 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259\": container with ID starting with cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259 not found: ID does not exist" containerID="cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.610363 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259"} err="failed to get container status \"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259\": rpc error: code = NotFound desc = could not find container \"cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259\": container with ID starting with cd3793a47341f8e6f5ef67d5eb8a8ed48ff859a064ba1c5014f45f883ac0d259 not found: ID does not exist" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.610389 5004 scope.go:117] "RemoveContainer" containerID="4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.610701 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548\": container with ID starting with 4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548 not found: ID does not exist" containerID="4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.610810 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548"} err="failed to get container status \"4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548\": rpc error: code = NotFound desc = could not find container \"4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548\": container with ID starting with 4bf7ca7a658581ea4ed67937d85126107983eeae7c9596782b35ddcbd3fe9548 not found: ID does not exist" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.610893 5004 scope.go:117] "RemoveContainer" containerID="c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.628339 5004 scope.go:117] "RemoveContainer" containerID="9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.650477 5004 scope.go:117] "RemoveContainer" containerID="8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.691333 5004 scope.go:117] "RemoveContainer" containerID="c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.691710 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8\": container with ID starting with c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8 not found: ID does not exist" containerID="c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.691749 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8"} err="failed to get 
container status \"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8\": rpc error: code = NotFound desc = could not find container \"c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8\": container with ID starting with c9fdf6354fa0c6026eef1eaca6a7562bb1ff6af9ec4c73aef57b13b154c64fa8 not found: ID does not exist" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.691773 5004 scope.go:117] "RemoveContainer" containerID="9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.692066 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140\": container with ID starting with 9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140 not found: ID does not exist" containerID="9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.692123 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140"} err="failed to get container status \"9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140\": rpc error: code = NotFound desc = could not find container \"9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140\": container with ID starting with 9b3207f59e9ccbdf068c3ff49d45192d2f4e666ee2cf3ffc744d88cc932bb140 not found: ID does not exist" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.692177 5004 scope.go:117] "RemoveContainer" containerID="8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502" Dec 08 18:54:15 crc kubenswrapper[5004]: E1208 18:54:15.692503 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502\": container with ID starting with 8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502 not found: ID does not exist" containerID="8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502" Dec 08 18:54:15 crc kubenswrapper[5004]: I1208 18:54:15.692567 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502"} err="failed to get container status \"8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502\": rpc error: code = NotFound desc = could not find container \"8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502\": container with ID starting with 8e348409d580e67dbbc1f79cdd5c3fc51ee9127eb0645bc1caa29799ab19d502 not found: ID does not exist" Dec 08 18:54:16 crc kubenswrapper[5004]: I1208 18:54:16.103784 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:54:16 crc kubenswrapper[5004]: I1208 18:54:16.535404 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lt66j" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="registry-server" containerID="cri-o://7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f" gracePeriod=2 Dec 08 18:54:16 crc kubenswrapper[5004]: I1208 18:54:16.727846 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" 
path="/var/lib/kubelet/pods/35ec334c-b741-473a-93e8-a588e1102c6a/volumes" Dec 08 18:54:16 crc kubenswrapper[5004]: I1208 18:54:16.728981 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" path="/var/lib/kubelet/pods/c2312b49-de56-41e9-b8cd-8786f68696b7/volumes" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.460036 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.542299 5004 generic.go:358] "Generic (PLEG): container finished" podID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerID="7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f" exitCode=0 Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.542603 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerDied","Data":"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f"} Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.542638 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lt66j" event={"ID":"1bb3b4ef-469e-4926-a259-48411ff90d77","Type":"ContainerDied","Data":"1601813691b3100712ca88ede80428f4a147c3b0da5fdbab1268acd9c7fbd6bf"} Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.542659 5004 scope.go:117] "RemoveContainer" containerID="7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.542706 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lt66j" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.558583 5004 scope.go:117] "RemoveContainer" containerID="3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.571581 5004 scope.go:117] "RemoveContainer" containerID="5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.587544 5004 scope.go:117] "RemoveContainer" containerID="7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f" Dec 08 18:54:17 crc kubenswrapper[5004]: E1208 18:54:17.587936 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f\": container with ID starting with 7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f not found: ID does not exist" containerID="7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.587973 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f"} err="failed to get container status \"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f\": rpc error: code = NotFound desc = could not find container \"7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f\": container with ID starting with 7330e17790df78a7a176ee38f1edae6b76c9ceb595fe43e5043126ecd1229c2f not found: ID does not exist" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.587996 5004 scope.go:117] "RemoveContainer" 
containerID="3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552" Dec 08 18:54:17 crc kubenswrapper[5004]: E1208 18:54:17.588348 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552\": container with ID starting with 3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552 not found: ID does not exist" containerID="3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.588389 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552"} err="failed to get container status \"3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552\": rpc error: code = NotFound desc = could not find container \"3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552\": container with ID starting with 3bfa0ef46e370d3bf2fa145b64b5a57349373a29fbea1391bee0f8f3f6073552 not found: ID does not exist" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.588433 5004 scope.go:117] "RemoveContainer" containerID="5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9" Dec 08 18:54:17 crc kubenswrapper[5004]: E1208 18:54:17.588696 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9\": container with ID starting with 5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9 not found: ID does not exist" containerID="5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.588716 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9"} err="failed to get container status \"5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9\": rpc error: code = NotFound desc = could not find container \"5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9\": container with ID starting with 5fe90fabac285be9cee0442f9fb256cff4a136db707494a11807648c00e5a9a9 not found: ID does not exist" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.596874 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities\") pod \"1bb3b4ef-469e-4926-a259-48411ff90d77\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.597057 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj964\" (UniqueName: \"kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964\") pod \"1bb3b4ef-469e-4926-a259-48411ff90d77\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.597118 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content\") pod \"1bb3b4ef-469e-4926-a259-48411ff90d77\" (UID: \"1bb3b4ef-469e-4926-a259-48411ff90d77\") " Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.597973 5004 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities" (OuterVolumeSpecName: "utilities") pod "1bb3b4ef-469e-4926-a259-48411ff90d77" (UID: "1bb3b4ef-469e-4926-a259-48411ff90d77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.603431 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964" (OuterVolumeSpecName: "kube-api-access-sj964") pod "1bb3b4ef-469e-4926-a259-48411ff90d77" (UID: "1bb3b4ef-469e-4926-a259-48411ff90d77"). InnerVolumeSpecName "kube-api-access-sj964". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.607740 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bb3b4ef-469e-4926-a259-48411ff90d77" (UID: "1bb3b4ef-469e-4926-a259-48411ff90d77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.700069 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.700120 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj964\" (UniqueName: \"kubernetes.io/projected/1bb3b4ef-469e-4926-a259-48411ff90d77-kube-api-access-sj964\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.700133 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b4ef-469e-4926-a259-48411ff90d77-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.871102 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:54:17 crc kubenswrapper[5004]: I1208 18:54:17.875432 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lt66j"] Dec 08 18:54:18 crc kubenswrapper[5004]: I1208 18:54:18.501666 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:54:18 crc kubenswrapper[5004]: I1208 18:54:18.548604 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7l7m" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="registry-server" containerID="cri-o://c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7" gracePeriod=2 Dec 08 18:54:18 crc kubenswrapper[5004]: I1208 18:54:18.717224 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" path="/var/lib/kubelet/pods/1bb3b4ef-469e-4926-a259-48411ff90d77/volumes" Dec 08 18:54:18 crc kubenswrapper[5004]: I1208 18:54:18.962632 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.120421 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities\") pod \"a4169fd9-a66b-4a3f-beca-26641d59434b\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.120601 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9spd7\" (UniqueName: \"kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7\") pod \"a4169fd9-a66b-4a3f-beca-26641d59434b\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.120639 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content\") pod \"a4169fd9-a66b-4a3f-beca-26641d59434b\" (UID: \"a4169fd9-a66b-4a3f-beca-26641d59434b\") " Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.121716 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities" (OuterVolumeSpecName: "utilities") pod "a4169fd9-a66b-4a3f-beca-26641d59434b" (UID: "a4169fd9-a66b-4a3f-beca-26641d59434b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.129568 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7" (OuterVolumeSpecName: "kube-api-access-9spd7") pod "a4169fd9-a66b-4a3f-beca-26641d59434b" (UID: "a4169fd9-a66b-4a3f-beca-26641d59434b"). InnerVolumeSpecName "kube-api-access-9spd7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.207354 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4169fd9-a66b-4a3f-beca-26641d59434b" (UID: "a4169fd9-a66b-4a3f-beca-26641d59434b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.221645 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.221697 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9spd7\" (UniqueName: \"kubernetes.io/projected/a4169fd9-a66b-4a3f-beca-26641d59434b-kube-api-access-9spd7\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.221710 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169fd9-a66b-4a3f-beca-26641d59434b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.559122 5004 generic.go:358] "Generic (PLEG): container finished" podID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerID="c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7" exitCode=0 Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.559183 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerDied","Data":"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7"} Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.559606 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7l7m" event={"ID":"a4169fd9-a66b-4a3f-beca-26641d59434b","Type":"ContainerDied","Data":"900221d5f1f170df7193461c9ff385ce1f247bf81db19ae67c273d163052d0c9"} Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.559637 5004 scope.go:117] "RemoveContainer" containerID="c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.559259 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t7l7m" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.595433 5004 scope.go:117] "RemoveContainer" containerID="32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.597438 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.602172 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7l7m"] Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.623282 5004 scope.go:117] "RemoveContainer" containerID="a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.639841 5004 scope.go:117] "RemoveContainer" containerID="c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7" Dec 08 18:54:19 crc kubenswrapper[5004]: E1208 18:54:19.640521 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7\": container with ID starting with c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7 not found: ID does not exist" containerID="c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.640558 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7"} err="failed to get container status \"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7\": rpc error: code = NotFound desc = could not find container \"c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7\": container with ID starting with c21befa365c4ab47349da926b246039d2b2e369efda0fa82e7bf413265f96ef7 not found: ID does not exist" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.640617 5004 scope.go:117] "RemoveContainer" containerID="32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb" Dec 08 18:54:19 crc kubenswrapper[5004]: E1208 18:54:19.641193 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb\": container with ID starting with 32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb not found: ID does not exist" containerID="32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.641255 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb"} err="failed to get container status \"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb\": rpc error: code = NotFound desc = could not find container \"32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb\": container with ID starting with 32c136aec99d3f7a479b3b0651fcec326355f9d1bed1bc6bbfd11d754e5561cb not found: ID does not exist" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.641295 5004 scope.go:117] "RemoveContainer" containerID="a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9" Dec 08 18:54:19 crc kubenswrapper[5004]: E1208 18:54:19.641634 5004 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9\": container with ID starting with a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9 not found: ID does not exist" containerID="a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9" Dec 08 18:54:19 crc kubenswrapper[5004]: I1208 18:54:19.641668 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9"} err="failed to get container status \"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9\": rpc error: code = NotFound desc = could not find container \"a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9\": container with ID starting with a668f464b5628fada62eaef514e938728991ed25d90b4ef4d2b76e15895645d9 not found: ID does not exist" Dec 08 18:54:20 crc kubenswrapper[5004]: I1208 18:54:20.718658 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" path="/var/lib/kubelet/pods/a4169fd9-a66b-4a3f-beca-26641d59434b/volumes" Dec 08 18:54:30 crc kubenswrapper[5004]: I1208 18:54:30.347178 5004 ???:1] "http: TLS handshake error from 192.168.126.11:37118: no serving certificate available for the kubelet" Dec 08 18:54:35 crc kubenswrapper[5004]: I1208 18:54:35.244749 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r4pkx"] Dec 08 18:54:36 crc kubenswrapper[5004]: I1208 18:54:36.596191 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:36 crc kubenswrapper[5004]: I1208 18:54:36.596451 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" podUID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" containerName="controller-manager" containerID="cri-o://7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5" gracePeriod=30 Dec 08 18:54:36 crc kubenswrapper[5004]: I1208 18:54:36.630547 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:36 crc kubenswrapper[5004]: I1208 18:54:36.631202 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerName="route-controller-manager" containerID="cri-o://eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16" gracePeriod=30 Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.219354 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.268527 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp\") pod \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.268599 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzfm\" (UniqueName: \"kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm\") pod \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.268653 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert\") pod \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.268692 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config\") pod \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.268762 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca\") pod \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\" (UID: \"0e8a8547-6e06-49fc-846c-47f77b4ad6c8\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.269894 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp" (OuterVolumeSpecName: "tmp") pod "0e8a8547-6e06-49fc-846c-47f77b4ad6c8" (UID: "0e8a8547-6e06-49fc-846c-47f77b4ad6c8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.270676 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config" (OuterVolumeSpecName: "config") pod "0e8a8547-6e06-49fc-846c-47f77b4ad6c8" (UID: "0e8a8547-6e06-49fc-846c-47f77b4ad6c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.271205 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e8a8547-6e06-49fc-846c-47f77b4ad6c8" (UID: "0e8a8547-6e06-49fc-846c-47f77b4ad6c8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.272476 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273281 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273298 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273319 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273325 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273340 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273349 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273357 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273365 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273374 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273380 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273390 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerName="route-controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273398 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerName="route-controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273406 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273412 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273423 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273429 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" 
containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273440 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273445 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="extract-content" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273454 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273460 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273474 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273479 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273488 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273494 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273500 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273505 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273514 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273521 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="extract-utilities" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273529 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59bd67aa-41ac-42e0-883c-ba376f5256d1" containerName="pruner" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273534 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bd67aa-41ac-42e0-883c-ba376f5256d1" containerName="pruner" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273628 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1aa164d-cf7a-4c71-90db-3488e29d60a2" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273644 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="35ec334c-b741-473a-93e8-a588e1102c6a" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273652 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4169fd9-a66b-4a3f-beca-26641d59434b" 
containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273659 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerName="route-controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273667 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bb3b4ef-469e-4926-a259-48411ff90d77" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273677 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2312b49-de56-41e9-b8cd-8786f68696b7" containerName="registry-server" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.273685 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="59bd67aa-41ac-42e0-883c-ba376f5256d1" containerName="pruner" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.278469 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm" (OuterVolumeSpecName: "kube-api-access-xfzfm") pod "0e8a8547-6e06-49fc-846c-47f77b4ad6c8" (UID: "0e8a8547-6e06-49fc-846c-47f77b4ad6c8"). InnerVolumeSpecName "kube-api-access-xfzfm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.282014 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e8a8547-6e06-49fc-846c-47f77b4ad6c8" (UID: "0e8a8547-6e06-49fc-846c-47f77b4ad6c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.305941 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.306117 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.370888 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371646 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371726 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371757 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkfd\" (UniqueName: \"kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371783 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371831 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371847 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfzfm\" (UniqueName: \"kubernetes.io/projected/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-kube-api-access-xfzfm\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371859 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371872 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.371882 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e8a8547-6e06-49fc-846c-47f77b4ad6c8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.466055 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.473534 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.473584 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkfd\" (UniqueName: \"kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.473617 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.473642 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.473700 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.474279 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.475356 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.475794 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config\") pod 
\"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.479838 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.496563 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkfd\" (UniqueName: \"kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd\") pod \"route-controller-manager-7bcf4ff857-fv2b4\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.499714 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.500319 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" containerName="controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.500337 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" containerName="controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.500447 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" containerName="controller-manager" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.508896 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.528037 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.574192 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.574252 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.574281 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.574333 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.574845 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575086 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp" (OuterVolumeSpecName: "tmp") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575093 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqdpf\" (UniqueName: \"kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf\") pod \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\" (UID: \"efadb339-01d5-42c1-ba13-15d4c2b97b2d\") " Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575384 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575518 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575597 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575601 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config" (OuterVolumeSpecName: "config") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575734 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575895 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.575958 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcstg\" (UniqueName: \"kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.576012 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.576033 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc 
kubenswrapper[5004]: I1208 18:54:37.576117 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.576130 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.576143 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efadb339-01d5-42c1-ba13-15d4c2b97b2d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.576154 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/efadb339-01d5-42c1-ba13-15d4c2b97b2d-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.579323 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.579478 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf" (OuterVolumeSpecName: "kube-api-access-mqdpf") pod "efadb339-01d5-42c1-ba13-15d4c2b97b2d" (UID: "efadb339-01d5-42c1-ba13-15d4c2b97b2d"). InnerVolumeSpecName "kube-api-access-mqdpf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.639540 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.676876 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.676929 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcstg\" (UniqueName: \"kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.676973 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.676997 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.677056 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.677124 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.677177 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mqdpf\" (UniqueName: \"kubernetes.io/projected/efadb339-01d5-42c1-ba13-15d4c2b97b2d-kube-api-access-mqdpf\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.677190 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efadb339-01d5-42c1-ba13-15d4c2b97b2d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.678612 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 
08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.678648 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.679964 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.680671 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.681380 5004 generic.go:358] "Generic (PLEG): container finished" podID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" containerID="eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16" exitCode=0 Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.681594 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" event={"ID":"0e8a8547-6e06-49fc-846c-47f77b4ad6c8","Type":"ContainerDied","Data":"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16"} Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.681671 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" event={"ID":"0e8a8547-6e06-49fc-846c-47f77b4ad6c8","Type":"ContainerDied","Data":"a5dfee185d5c43f8df6b7158275979e63e1c76d500d60fc10481254217d14f9e"} Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.681697 5004 scope.go:117] "RemoveContainer" containerID="eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.684475 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.685873 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.702736 5004 generic.go:358] "Generic (PLEG): container finished" podID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" containerID="7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5" exitCode=0 Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.702833 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" event={"ID":"efadb339-01d5-42c1-ba13-15d4c2b97b2d","Type":"ContainerDied","Data":"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5"} Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.702871 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" event={"ID":"efadb339-01d5-42c1-ba13-15d4c2b97b2d","Type":"ContainerDied","Data":"bda165f45b8d922b3c5810467d1685316ffad979f0e3c64a402e7b02fb87a094"} Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.702976 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.702987 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcstg\" (UniqueName: \"kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg\") pod \"controller-manager-84b8ff8d65-wpv7g\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.744377 5004 scope.go:117] "RemoveContainer" containerID="eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16" Dec 08 18:54:37 crc kubenswrapper[5004]: E1208 18:54:37.748329 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16\": container with ID starting with eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16 not found: ID does not exist" containerID="eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.748621 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16"} err="failed to get container status \"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16\": rpc error: code = NotFound desc = could not find container \"eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16\": container with ID starting with eae08b841e31e736dc1fb51d1f16a262659388886c75364ba58109a4ab27ca16 not found: ID does not exist" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.748807 5004 scope.go:117] "RemoveContainer" containerID="7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 
18:54:37.749829 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.754011 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7474799fdc-lbf7v"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.768709 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.773697 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cbdf9cf55-bmwj8"] Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.808887 5004 scope.go:117] "RemoveContainer" containerID="7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5" Dec 08 18:54:37 crc kubenswrapper[5004]: E1208 18:54:37.809825 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5\": container with ID starting with 7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5 not found: ID does not exist" containerID="7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.809862 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5"} err="failed to get container status \"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5\": rpc error: code = NotFound desc = could not find container \"7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5\": container with ID starting with 7a496d21138463062a4970cf01237969330d2b601ca8a0a8e5f92aae729a3fe5 not found: ID does not exist" Dec 08 18:54:37 crc kubenswrapper[5004]: I1208 18:54:37.830367 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.036455 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.114058 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.723428 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8a8547-6e06-49fc-846c-47f77b4ad6c8" path="/var/lib/kubelet/pods/0e8a8547-6e06-49fc-846c-47f77b4ad6c8/volumes" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.724581 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efadb339-01d5-42c1-ba13-15d4c2b97b2d" path="/var/lib/kubelet/pods/efadb339-01d5-42c1-ba13-15d4c2b97b2d/volumes" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.724985 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.725007 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" event={"ID":"4938969e-b368-4aa2-ab42-5ff95af63309","Type":"ContainerStarted","Data":"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb"} Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.725022 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" event={"ID":"4938969e-b368-4aa2-ab42-5ff95af63309","Type":"ContainerStarted","Data":"2ae84feb2b514f53a6640de6e65759b0215168c0c12bd32028e32c92f7d0ea41"} Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.733177 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerStarted","Data":"51df531476d0d274246b659fd909d424c5a21e27b007f945b21b9c320caf11bc"} Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.733236 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerStarted","Data":"c624b31a036064b022e14e890f0db60c193648dca05ac01c1ba6857e793174bd"} Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.733502 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.771749 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" podStartSLOduration=2.771713102 podStartE2EDuration="2.771713102s" podCreationTimestamp="2025-12-08 18:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:38.747313723 +0000 UTC m=+212.396222031" watchObservedRunningTime="2025-12-08 18:54:38.771713102 +0000 UTC m=+212.420621410" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.771930 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podStartSLOduration=2.771922549 podStartE2EDuration="2.771922549s" podCreationTimestamp="2025-12-08 18:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:38.767560916 +0000 UTC m=+212.416469224" watchObservedRunningTime="2025-12-08 18:54:38.771922549 +0000 UTC m=+212.420830887" Dec 08 18:54:38 crc kubenswrapper[5004]: I1208 18:54:38.890807 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:54:39 crc kubenswrapper[5004]: I1208 18:54:39.014406 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.659671 5004 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.671782 5004 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.671849 5004 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.672125 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.673689 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1" gracePeriod=15 Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.674596 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894" gracePeriod=15 Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.674781 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62" gracePeriod=15 Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.674805 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911" gracePeriod=15 Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.674613 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7" gracePeriod=15 Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 
18:54:41.677634 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699831 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699877 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699887 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699920 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699928 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699970 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699978 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.699998 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700005 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700027 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700035 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700058 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700106 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700119 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.700134 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701833 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701856 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701878 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701897 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701906 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701915 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701925 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.701937 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.702199 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.702211 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.702269 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.702278 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.702495 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.677806 5004 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.715546 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.740387 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: 
I1208 18:54:41.740446 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.740533 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.740616 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.740638 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.841821 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.841897 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.841929 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842006 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842022 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842056 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842100 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842115 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842131 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842155 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842752 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842815 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842916 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.843437 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.842763 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943274 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943321 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943340 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943368 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943393 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943508 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.943551 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.944373 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.944504 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:41 crc kubenswrapper[5004]: I1208 18:54:41.944566 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.764032 5004 generic.go:358] "Generic (PLEG): container finished" podID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" containerID="b697319b66f153c47aa1815340f4c74abc5258cac6c6a680c4dad9d564fe1999" exitCode=0 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.764117 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ffcdf638-173d-4a35-9fb6-01cb9844af6a","Type":"ContainerDied","Data":"b697319b66f153c47aa1815340f4c74abc5258cac6c6a680c4dad9d564fe1999"} Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.765152 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.767666 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.769181 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.769968 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62" exitCode=0 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.769990 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911" exitCode=0 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.769996 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894" exitCode=0 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.770002 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7" exitCode=2 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.770031 5004 scope.go:117] "RemoveContainer" containerID="43241b3672e4532d245751b9b9e81dcd61108d13cf842eeb449275914a06f209" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.772519 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/0.log" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.772558 5004 generic.go:358] 
"Generic (PLEG): container finished" podID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerID="51df531476d0d274246b659fd909d424c5a21e27b007f945b21b9c320caf11bc" exitCode=255 Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.772630 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerDied","Data":"51df531476d0d274246b659fd909d424c5a21e27b007f945b21b9c320caf11bc"} Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.773131 5004 scope.go:117] "RemoveContainer" containerID="51df531476d0d274246b659fd909d424c5a21e27b007f945b21b9c320caf11bc" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.775417 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:42 crc kubenswrapper[5004]: I1208 18:54:42.775768 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:42 crc kubenswrapper[5004]: E1208 18:54:42.813331 5004 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-84b8ff8d65-wpv7g.187f524ef653f72a\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-84b8ff8d65-wpv7g.187f524ef653f72a openshift-controller-manager 39154 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-84b8ff8d65-wpv7g,UID:385ae430-6acc-4039-bc95-e19b4f69f5aa,APIVersion:v1,ResourceVersion:39142,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:54:38 +0000 UTC,LastTimestamp:2025-12-08 18:54:42.812403946 +0000 UTC m=+216.461312264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.781427 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/0.log" Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.781949 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerStarted","Data":"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41"} Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.782747 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.785039 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.785940 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:43 crc kubenswrapper[5004]: I1208 18:54:43.788470 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.184773 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.185966 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.186764 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.187446 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.187872 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.188365 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.188788 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.189039 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.189301 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.271925 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access\") pod \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272401 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272581 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock\") pod \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272718 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272844 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir\") pod \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\" (UID: \"ffcdf638-173d-4a35-9fb6-01cb9844af6a\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272638 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock" (OuterVolumeSpecName: "var-lock") pod "ffcdf638-173d-4a35-9fb6-01cb9844af6a" (UID: "ffcdf638-173d-4a35-9fb6-01cb9844af6a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272963 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272651 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.272918 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ffcdf638-173d-4a35-9fb6-01cb9844af6a" (UID: "ffcdf638-173d-4a35-9fb6-01cb9844af6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273147 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273289 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273329 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273426 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273484 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273822 5004 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.273928 5004 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.274002 5004 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.274068 5004 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.274183 5004 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.274253 5004 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffcdf638-173d-4a35-9fb6-01cb9844af6a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.280302 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ffcdf638-173d-4a35-9fb6-01cb9844af6a" (UID: "ffcdf638-173d-4a35-9fb6-01cb9844af6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.281335 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.376109 5004 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.376140 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcdf638-173d-4a35-9fb6-01cb9844af6a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.720998 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.783044 5004 patch_prober.go:28] interesting pod/controller-manager-84b8ff8d65-wpv7g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.783172 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.804211 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.805492 5004 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1" exitCode=0 Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.805693 5004 scope.go:117] "RemoveContainer" containerID="79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.805698 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.806356 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.806530 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.808548 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.811915 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ffcdf638-173d-4a35-9fb6-01cb9844af6a","Type":"ContainerDied","Data":"9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958"} Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.811963 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f4ac44dc530636dfb371ef7124334446639ee53c5778fdec9adfe287d6c0958" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.812368 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.812447 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.812804 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.813323 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.818250 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.819550 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.820624 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.829469 5004 scope.go:117] "RemoveContainer" containerID="752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.849223 5004 scope.go:117] "RemoveContainer" containerID="d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.869793 5004 scope.go:117] "RemoveContainer" containerID="02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.897573 5004 scope.go:117] "RemoveContainer" containerID="54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1" Dec 08 18:54:44 crc kubenswrapper[5004]: I1208 18:54:44.938698 5004 scope.go:117] "RemoveContainer" containerID="c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.016655 5004 scope.go:117] "RemoveContainer" containerID="79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.017257 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62\": container with ID starting with 79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62 not found: ID does not exist" 
containerID="79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.017330 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62"} err="failed to get container status \"79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62\": rpc error: code = NotFound desc = could not find container \"79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62\": container with ID starting with 79d91468b458d3045f62d03630b45d50675b06c340a9196e5893405f67dd7f62 not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.017365 5004 scope.go:117] "RemoveContainer" containerID="752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.017759 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\": container with ID starting with 752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911 not found: ID does not exist" containerID="752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.017800 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911"} err="failed to get container status \"752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\": rpc error: code = NotFound desc = could not find container \"752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911\": container with ID starting with 752264c00d4c0eb9909ff2e9cc1fb313b4be4d1d66fc2812e801e62afac79911 not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.017819 5004 scope.go:117] "RemoveContainer" containerID="d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.018322 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\": container with ID starting with d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894 not found: ID does not exist" containerID="d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.018342 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894"} err="failed to get container status \"d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\": rpc error: code = NotFound desc = could not find container \"d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894\": container with ID starting with d5bfa9856f46e16959f0e43a4d955f40471c5c05f098d9515d79e3a3405d0894 not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.018378 5004 scope.go:117] "RemoveContainer" containerID="02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.019462 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\": container with ID starting with 02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7 not found: ID does not exist" containerID="02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.019493 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7"} err="failed to get container status \"02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\": rpc error: code = NotFound desc = could not find container \"02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7\": container with ID starting with 02e6c84b7a70394eda2af56e35bc6050e0716312ea0c7c329e952297d81b88d7 not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.019519 5004 scope.go:117] "RemoveContainer" containerID="54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.019720 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\": container with ID starting with 54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1 not found: ID does not exist" containerID="54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.019753 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1"} err="failed to get container status \"54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\": rpc error: code = NotFound desc = could not find container \"54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1\": container with ID starting with 54a52f3f3fb5f76a2c7aaf8c9e0e1575239f807f46e2fb2cbdcdbc4d91dc07f1 not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.019807 5004 scope.go:117] "RemoveContainer" containerID="c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.020294 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\": container with ID starting with c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a not found: ID does not exist" containerID="c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.020378 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a"} err="failed to get container status \"c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\": rpc error: code = NotFound desc = could not find container \"c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a\": container with ID starting with c092938427e9433fffbb731b1eedc8a643db8c7966befe0cdbeb734aa7c9315a not found: ID does not exist" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.446228 5004 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-84b8ff8d65-wpv7g.187f524ef653f72a\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-84b8ff8d65-wpv7g.187f524ef653f72a openshift-controller-manager 39154 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-84b8ff8d65-wpv7g,UID:385ae430-6acc-4039-bc95-e19b4f69f5aa,APIVersion:v1,ResourceVersion:39142,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:54:38 +0000 UTC,LastTimestamp:2025-12-08 18:54:42.812403946 +0000 UTC m=+216.461312264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.550563 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:54:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:54:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:54:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:54:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.550847 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.551021 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.551240 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.551591 5004 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection 
refused" Dec 08 18:54:45 crc kubenswrapper[5004]: E1208 18:54:45.551631 5004 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.813531 5004 patch_prober.go:28] interesting pod/controller-manager-84b8ff8d65-wpv7g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:54:45 crc kubenswrapper[5004]: I1208 18:54:45.813698 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:54:46 crc kubenswrapper[5004]: I1208 18:54:46.714653 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:46 crc kubenswrapper[5004]: I1208 18:54:46.715588 5004 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:46 crc kubenswrapper[5004]: I1208 18:54:46.716332 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:46 crc kubenswrapper[5004]: E1208 18:54:46.716790 5004 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:46 crc kubenswrapper[5004]: I1208 18:54:46.717176 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:46 crc kubenswrapper[5004]: W1208 18:54:46.741988 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-5da90212ff526d6f86c13056c0aa6342af62b8bfbbc510a4c53ba492a213ab58 WatchSource:0}: Error finding container 5da90212ff526d6f86c13056c0aa6342af62b8bfbbc510a4c53ba492a213ab58: Status 404 returned error can't find the container with id 5da90212ff526d6f86c13056c0aa6342af62b8bfbbc510a4c53ba492a213ab58 Dec 08 18:54:46 crc kubenswrapper[5004]: I1208 18:54:46.828203 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"5da90212ff526d6f86c13056c0aa6342af62b8bfbbc510a4c53ba492a213ab58"} Dec 08 18:54:47 crc kubenswrapper[5004]: I1208 18:54:47.841827 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce"} Dec 08 18:54:47 crc kubenswrapper[5004]: I1208 18:54:47.842162 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:47 crc kubenswrapper[5004]: I1208 18:54:47.842671 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:47 crc kubenswrapper[5004]: E1208 18:54:47.842724 5004 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:47 crc kubenswrapper[5004]: I1208 18:54:47.842853 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:48 crc kubenswrapper[5004]: I1208 18:54:48.848596 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:48 crc kubenswrapper[5004]: E1208 18:54:48.850252 5004 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.237139 5004 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.238620 5004 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.238944 5004 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.239169 5004 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.239334 5004 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:51 crc kubenswrapper[5004]: I1208 18:54:51.239358 5004 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.239533 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.440179 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 08 18:54:51 crc kubenswrapper[5004]: E1208 18:54:51.841002 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 08 18:54:52 crc kubenswrapper[5004]: E1208 18:54:52.642120 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.709689 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.711467 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.712528 5004 status_manager.go:895] "Failed to get status for pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.729050 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.729129 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:53 crc kubenswrapper[5004]: E1208 18:54:53.729629 5004 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.729993 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:53 crc kubenswrapper[5004]: I1208 18:54:53.878815 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3320ae77b4b49742fb146ee55ae276b2064adacfdd8f796586513a6ff2250cff"} Dec 08 18:54:54 crc kubenswrapper[5004]: E1208 18:54:54.243210 5004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.885463 5004 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="86a21ce25d72670529960bb9ef742bea6bf088242be72c408670cf6cca0b6e03" exitCode=0 Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.885690 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"86a21ce25d72670529960bb9ef742bea6bf088242be72c408670cf6cca0b6e03"} Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.886340 5004 status_manager.go:895] "Failed to get status for pod" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.886739 5004 status_manager.go:895] "Failed to get status for 
pod" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84b8ff8d65-wpv7g\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.886929 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:54 crc kubenswrapper[5004]: I1208 18:54:54.887008 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:54 crc kubenswrapper[5004]: E1208 18:54:54.887370 5004 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.813871 5004 patch_prober.go:28] interesting pod/controller-manager-84b8ff8d65-wpv7g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.814250 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.921308 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ae4352304f4e6544c9a25a1347010f649d4c70d5f1fdef5f1f1d447828f9cfa5"} Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.921358 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"116c898dff8ca7b2453d4a58bd314f4b84cf2bf56b1d591e993681459dbd52d2"} Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.921370 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bf4111d22344b0cde58a3550e8453c6422ba98bb8739dc05a33e539afdf7515e"} Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.921382 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bb231468f32bae00354ceaf7af09edaf33256d681603e416d6c291299949f70c"} Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.926096 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.926165 5004 generic.go:358] "Generic (PLEG): 
container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b" exitCode=1 Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.926247 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b"} Dec 08 18:54:55 crc kubenswrapper[5004]: I1208 18:54:55.927088 5004 scope.go:117] "RemoveContainer" containerID="49cf44a5c7656e9efaf5e979ca46ec2766a1e60f5bb798d7f18f0c1c3c59a50b" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.840543 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.935614 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"334b35f94fd30f7d04ff15ef89b8bef7eab38a129c8347f12802a7ac733d1a68"} Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.936302 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.935700 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.936437 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.938267 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:54:56 crc kubenswrapper[5004]: I1208 18:54:56.938351 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6d724fe7ba063cd28d3abf3c1285cd1730b2d1aead455beed5fb1f38d1fc6c79"} Dec 08 18:54:58 crc kubenswrapper[5004]: I1208 18:54:58.730142 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:58 crc kubenswrapper[5004]: I1208 18:54:58.730497 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:58 crc kubenswrapper[5004]: I1208 18:54:58.735903 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.287980 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" containerID="cri-o://356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def" gracePeriod=15 Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.680404 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.808424 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.808485 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.808537 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.808952 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809112 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809192 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809229 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809258 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809264 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809330 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809368 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwxv\" (UniqueName: \"kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809389 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809440 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809480 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809497 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809537 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login\") pod \"9296f49b-35cb-4c66-afc5-a62a45480f3a\" (UID: \"9296f49b-35cb-4c66-afc5-a62a45480f3a\") " Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.809792 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.810018 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.810045 5004 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.810061 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.810090 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.810459 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.815614 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.816216 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.816493 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv" (OuterVolumeSpecName: "kube-api-access-jpwxv") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "kube-api-access-jpwxv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.817424 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.818039 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.819504 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.819755 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.819931 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.820738 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9296f49b-35cb-4c66-afc5-a62a45480f3a" (UID: "9296f49b-35cb-4c66-afc5-a62a45480f3a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911033 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jpwxv\" (UniqueName: \"kubernetes.io/projected/9296f49b-35cb-4c66-afc5-a62a45480f3a-kube-api-access-jpwxv\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911088 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911107 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911120 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911130 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911139 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911148 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911156 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911169 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911183 5004 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9296f49b-35cb-4c66-afc5-a62a45480f3a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.911197 5004 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9296f49b-35cb-4c66-afc5-a62a45480f3a-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.962896 5004 generic.go:358] "Generic (PLEG): container finished" podID="9296f49b-35cb-4c66-afc5-a62a45480f3a" 
containerID="356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def" exitCode=0 Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.962945 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" event={"ID":"9296f49b-35cb-4c66-afc5-a62a45480f3a","Type":"ContainerDied","Data":"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def"} Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.962972 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" event={"ID":"9296f49b-35cb-4c66-afc5-a62a45480f3a","Type":"ContainerDied","Data":"cca8740312b05bd64958afbbc7849ce6e9c7be1e397fa5f69d2ceb669ebc41cc"} Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.962987 5004 scope.go:117] "RemoveContainer" containerID="356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.962998 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r4pkx" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.991578 5004 scope.go:117] "RemoveContainer" containerID="356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def" Dec 08 18:55:00 crc kubenswrapper[5004]: E1208 18:55:00.992031 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def\": container with ID starting with 356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def not found: ID does not exist" containerID="356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def" Dec 08 18:55:00 crc kubenswrapper[5004]: I1208 18:55:00.992219 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def"} err="failed to get container status \"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def\": rpc error: code = NotFound desc = could not find container \"356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def\": container with ID starting with 356c920a21228f87a643c883f5eb1bf1354abc68facee8d6315da74e42937def not found: ID does not exist" Dec 08 18:55:01 crc kubenswrapper[5004]: I1208 18:55:01.000103 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:55:01 crc kubenswrapper[5004]: I1208 18:55:01.000192 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:55:01 crc kubenswrapper[5004]: I1208 18:55:01.550657 5004 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r4pkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:55:01 crc 
Dec 08 18:55:01 crc kubenswrapper[5004]: I1208 18:55:01.955636 5004 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:01 crc kubenswrapper[5004]: I1208 18:55:01.955668 5004 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.009606 5004 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="5cd1bd0b-433b-42f9-92db-659ee277bea5" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.988377 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.992642 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.994734 5004 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="5cd1bd0b-433b-42f9-92db-659ee277bea5" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.995996 5004 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://bb231468f32bae00354ceaf7af09edaf33256d681603e416d6c291299949f70c" Dec 08 18:55:02 crc kubenswrapper[5004]: I1208 18:55:02.996147 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:03 crc kubenswrapper[5004]: I1208 18:55:03.992281 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:03 crc kubenswrapper[5004]: I1208 18:55:03.992630 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:03 crc kubenswrapper[5004]: I1208 18:55:03.997211 5004 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="5cd1bd0b-433b-42f9-92db-659ee277bea5" Dec 08 18:55:04 crc kubenswrapper[5004]: I1208 18:55:04.745489 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:55:04 crc kubenswrapper[5004]: I1208 18:55:04.750206 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:55:04 crc kubenswrapper[5004]: I1208 18:55:04.997023 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
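The "Deleted mirror pod" / "Creating a mirror pod" entries above concern kube-apiserver-crc, a static pod run directly from a manifest on the node; the kubelet keeps an API-server-visible mirror pod for it and recreates that mirror whenever it stops matching the static pod. A sketch of distinguishing mirror pods from regular pods follows, assuming in-cluster client-go access; the kubernetes.io/config.mirror annotation it checks is the one the kubelet sets on mirror pods.

```go
// A sketch, assuming in-cluster credentials: list pods in a namespace and
// flag the ones that are kubelet-created mirrors of static pods.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("openshift-kube-apiserver").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// The kubelet stamps mirror pods with the hash of the static pod manifest.
		if hash, ok := p.Annotations["kubernetes.io/config.mirror"]; ok {
			fmt.Printf("%s is a mirror pod (config hash %s) on node %s\n", p.Name, hash, p.Spec.NodeName)
		}
	}
}
```

Deleting the mirror pod, as seen in the log, never stops the real static pod; the kubelet simply recreates the mirror, which is why the old and new pod UIDs differ while the container keeps running.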
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:55:05 crc kubenswrapper[5004]: I1208 18:55:05.815131 5004 patch_prober.go:28] interesting pod/controller-manager-84b8ff8d65-wpv7g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:55:05 crc kubenswrapper[5004]: I1208 18:55:05.815657 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:55:06 crc kubenswrapper[5004]: I1208 18:55:06.012830 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.017576 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.422229 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.442460 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.618229 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.640046 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.685375 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.750765 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 18:55:12 crc kubenswrapper[5004]: I1208 18:55:12.826223 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.128041 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.401699 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.402445 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.678286 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.833521 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.875525 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.884889 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 18:55:13 crc kubenswrapper[5004]: I1208 18:55:13.891978 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.053130 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/1.log" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.053662 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/0.log" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.053693 5004 generic.go:358] "Generic (PLEG): container finished" podID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" exitCode=255 Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.053817 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerDied","Data":"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41"} Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.053855 5004 scope.go:117] "RemoveContainer" containerID="51df531476d0d274246b659fd909d424c5a21e27b007f945b21b9c320caf11bc" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.054318 5004 scope.go:117] "RemoveContainer" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:14 crc kubenswrapper[5004]: E1208 18:55:14.054528 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-84b8ff8d65-wpv7g_openshift-controller-manager(385ae430-6acc-4039-bc95-e19b4f69f5aa)\"" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.203517 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.244270 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.285551 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.399997 5004 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.590593 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.603119 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.625998 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.665384 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.789548 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.867333 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 18:55:14 crc kubenswrapper[5004]: I1208 18:55:14.881767 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.008280 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.063365 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/1.log" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.199994 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.353171 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.388793 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.506116 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.629378 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.635067 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.841878 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.916759 5004 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 18:55:15 crc kubenswrapper[5004]: I1208 18:55:15.973905 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.062256 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.070896 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.071828 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.158676 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.274283 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.428489 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.537375 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.539033 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.575281 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.652974 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.696658 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.721647 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.770377 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.778504 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.778811 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.806593 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.835264 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 18:55:16 crc kubenswrapper[5004]: I1208 18:55:16.843641 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.054196 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.055703 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.076790 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.132141 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.154198 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.303607 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.416961 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.592734 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.612553 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.684871 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.700269 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.719298 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.789163 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.828035 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.831648 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 
18:55:17.832615 5004 scope.go:117] "RemoveContainer" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:17 crc kubenswrapper[5004]: E1208 18:55:17.832970 5004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-84b8ff8d65-wpv7g_openshift-controller-manager(385ae430-6acc-4039-bc95-e19b4f69f5aa)\"" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.840289 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.877642 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.960914 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 18:55:17 crc kubenswrapper[5004]: I1208 18:55:17.967672 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.007593 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.148904 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.271311 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.286937 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.328696 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.340279 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.447276 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.466886 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.521641 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.531177 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.561612 5004 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.571597 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.610127 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.615385 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.677161 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.757048 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.788680 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.909185 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 18:55:18 crc kubenswrapper[5004]: I1208 18:55:18.983332 5004 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.007005 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.092103 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.186467 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.210298 5004 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.326026 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.344750 5004 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.418754 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.506610 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.565310 5004 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.702289 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.829051 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.846513 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 18:55:19 crc kubenswrapper[5004]: I1208 18:55:19.944067 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.007777 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.016349 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.040971 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.063063 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.066749 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.070549 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.200959 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.212644 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.250717 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.304032 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.476701 5004 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.490654 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.622925 5004 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.626353 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.715509 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.818686 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.831132 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.862803 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 18:55:20 crc kubenswrapper[5004]: I1208 18:55:20.941458 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.062264 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.077097 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.145192 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.151876 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.164681 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.180752 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.187035 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.363923 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.413821 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.424248 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.525940 5004 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.637799 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.643926 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.672310 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.683502 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.761056 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.800782 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.827555 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.847782 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.944547 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.962635 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.970704 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.973711 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.973889 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.985790 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 18:55:21 crc kubenswrapper[5004]: I1208 18:55:21.997663 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.059934 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.072692 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.128520 5004 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.174805 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.192241 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.345794 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.392352 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.519808 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.582891 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.624764 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.636356 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.748600 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.776406 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.982029 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 18:55:22 crc kubenswrapper[5004]: I1208 18:55:22.998999 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.009715 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.031578 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.095973 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.159097 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.273907 5004 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.301720 5004 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.307605 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r4pkx","openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.307690 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7f9ff6787-7t4qm"] Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308332 5004 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308362 5004 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e72fac8-ae14-48dc-b490-c2ed622b1496" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308454 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" containerName="installer" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308472 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" containerName="installer" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308510 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308518 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308644 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" containerName="oauth-openshift" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.308659 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffcdf638-173d-4a35-9fb6-01cb9844af6a" containerName="installer" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.321303 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.340866 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.340935 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347017 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347152 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347247 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347306 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347357 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347496 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347581 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347528 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347682 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.347718 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.348225 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.348583 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.348583 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.358122 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.358826 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.390375 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.390353031 podStartE2EDuration="22.390353031s" podCreationTimestamp="2025-12-08 18:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:23.385567775 +0000 UTC m=+257.034476083" watchObservedRunningTime="2025-12-08 18:55:23.390353031 +0000 UTC m=+257.039261349" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.395412 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453084 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-session\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453130 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453176 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwzk\" (UniqueName: \"kubernetes.io/projected/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-kube-api-access-kpwzk\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453200 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453325 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453391 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453457 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-dir\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " 
pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453522 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453549 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453582 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-policies\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453613 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453649 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453711 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.453773 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.482738 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 
18:55:23.510853 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555245 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpwzk\" (UniqueName: \"kubernetes.io/projected/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-kube-api-access-kpwzk\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555290 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555325 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555353 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555385 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-dir\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555414 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555431 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555452 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-policies\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " 
pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555467 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555488 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555504 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555528 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555555 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-session\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.555573 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.556423 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-dir\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.557060 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-audit-policies\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc 
kubenswrapper[5004]: I1208 18:55:23.557384 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.557707 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.558583 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.562566 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.562620 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-session\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.562784 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.562767 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.565150 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.565821 5004 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.570210 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.573647 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.584125 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpwzk\" (UniqueName: \"kubernetes.io/projected/c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a-kube-api-access-kpwzk\") pod \"oauth-openshift-7f9ff6787-7t4qm\" (UID: \"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a\") " pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.664697 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.784225 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.787842 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.802050 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.815556 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.909277 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 18:55:23 crc kubenswrapper[5004]: I1208 18:55:23.912409 5004 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.137901 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.145103 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.202303 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.211615 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.213000 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.270212 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.399063 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.408757 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.424894 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.473417 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.512687 5004 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.513007 5004 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce" gracePeriod=5 Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.716985 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9296f49b-35cb-4c66-afc5-a62a45480f3a" path="/var/lib/kubelet/pods/9296f49b-35cb-4c66-afc5-a62a45480f3a/volumes" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.745179 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.763759 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.776759 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.780373 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.899805 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.900018 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 18:55:24 crc kubenswrapper[5004]: I1208 18:55:24.998541 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.046984 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.047221 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.096974 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.125903 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.214055 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.234978 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.260900 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.321183 5004 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.512687 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.584089 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.607649 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.737629 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.849020 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.867390 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 18:55:25 crc kubenswrapper[5004]: I1208 18:55:25.887784 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 18:55:26 crc kubenswrapper[5004]: I1208 18:55:26.391026 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 18:55:26 crc kubenswrapper[5004]: I1208 18:55:26.535680 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:55:26 crc kubenswrapper[5004]: I1208 18:55:26.761028 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 18:55:26 crc kubenswrapper[5004]: I1208 18:55:26.907937 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 18:55:26 crc kubenswrapper[5004]: I1208 18:55:26.918292 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.015574 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.145588 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" event={"ID":"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a","Type":"ContainerStarted","Data":"af71468bc86fb5f2f8bdb7e8ef5f61ed946680768054fa3fa8204c4316636a0a"} Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.145877 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" event={"ID":"c21fc0c4-3218-4b8c-a2f0-1ed1ed1d3f0a","Type":"ContainerStarted","Data":"d8f376616b8a2f4850200cf436b4d17a321cae4d58a2e3586abc57e1feedd629"} Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.147154 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.152469 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.152869 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.168731 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f9ff6787-7t4qm" podStartSLOduration=52.168712957 podStartE2EDuration="52.168712957s" podCreationTimestamp="2025-12-08 18:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:27.166255787 +0000 UTC m=+260.815164115" watchObservedRunningTime="2025-12-08 18:55:27.168712957 +0000 UTC m=+260.817621265" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.305597 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.334154 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.459450 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.635644 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.758845 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 18:55:27 crc kubenswrapper[5004]: I1208 18:55:27.823735 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:55:28 crc kubenswrapper[5004]: I1208 18:55:28.554716 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 18:55:28 crc kubenswrapper[5004]: I1208 18:55:28.673431 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:55:28 crc kubenswrapper[5004]: I1208 18:55:28.899639 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.111599 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.277720 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.290556 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.312539 5004 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.454402 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 18:55:29 crc kubenswrapper[5004]: I1208 18:55:29.514920 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.086857 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.087179 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.088631 5004 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.117006 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.161745 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.161792 5004 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce" exitCode=137 Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.161989 5004 scope.go:117] "RemoveContainer" containerID="a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.162435 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.177761 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.177984 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178132 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178307 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178390 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178409 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178509 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178593 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178603 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.178982 5004 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.179005 5004 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.179018 5004 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.179029 5004 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.186208 5004 scope.go:117] "RemoveContainer" containerID="a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce" Dec 08 18:55:30 crc kubenswrapper[5004]: E1208 18:55:30.186640 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce\": container with ID starting with a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce not found: ID does not exist" containerID="a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.186685 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce"} err="failed to get container status \"a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce\": rpc error: code = NotFound desc = could not find container \"a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce\": container with ID starting with a1dcc08f21bd63463f90307bfd355971947649ad6f2728c8118366e1a89d17ce not found: ID does not exist" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.199935 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.280391 5004 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.481444 5004 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.595915 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.760942 5004 scope.go:117] "RemoveContainer" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:30 crc kubenswrapper[5004]: I1208 18:55:30.773706 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.001869 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.001943 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.172275 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/1.log" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.172363 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerStarted","Data":"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760"} Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.172777 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.388337 5004 ???:1] "http: TLS handshake error from 192.168.126.11:53924: no serving certificate available for the kubelet" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.507441 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:55:31 crc kubenswrapper[5004]: I1208 18:55:31.592919 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 18:55:32 crc 
kubenswrapper[5004]: I1208 18:55:32.138136 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 18:55:36 crc kubenswrapper[5004]: I1208 18:55:36.605264 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:55:36 crc kubenswrapper[5004]: I1208 18:55:36.605581 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" containerID="cri-o://cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760" gracePeriod=30 Dec 08 18:55:36 crc kubenswrapper[5004]: I1208 18:55:36.622680 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:55:36 crc kubenswrapper[5004]: I1208 18:55:36.622943 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" podUID="4938969e-b368-4aa2-ab42-5ff95af63309" containerName="route-controller-manager" containerID="cri-o://8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb" gracePeriod=30 Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.015445 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/1.log" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.015778 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.022902 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.056314 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057351 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057375 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057413 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057421 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057435 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057441 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057452 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4938969e-b368-4aa2-ab42-5ff95af63309" containerName="route-controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057458 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938969e-b368-4aa2-ab42-5ff95af63309" containerName="route-controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057640 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="4938969e-b368-4aa2-ab42-5ff95af63309" containerName="route-controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057664 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057673 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057685 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.057695 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.064041 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.064275 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069294 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca\") pod \"4938969e-b368-4aa2-ab42-5ff95af63309\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069353 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp\") pod \"4938969e-b368-4aa2-ab42-5ff95af63309\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069395 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069415 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert\") pod \"4938969e-b368-4aa2-ab42-5ff95af63309\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069454 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfkfd\" (UniqueName: \"kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd\") pod \"4938969e-b368-4aa2-ab42-5ff95af63309\" (UID: \"4938969e-b368-4aa2-ab42-5ff95af63309\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069484 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069515 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069539 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069577 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcstg\" (UniqueName: \"kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069632 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config\") pod \"4938969e-b368-4aa2-ab42-5ff95af63309\" (UID: 
\"4938969e-b368-4aa2-ab42-5ff95af63309\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.069674 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca\") pod \"385ae430-6acc-4039-bc95-e19b4f69f5aa\" (UID: \"385ae430-6acc-4039-bc95-e19b4f69f5aa\") " Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.070188 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp" (OuterVolumeSpecName: "tmp") pod "4938969e-b368-4aa2-ab42-5ff95af63309" (UID: "4938969e-b368-4aa2-ab42-5ff95af63309"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.070724 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca" (OuterVolumeSpecName: "client-ca") pod "4938969e-b368-4aa2-ab42-5ff95af63309" (UID: "4938969e-b368-4aa2-ab42-5ff95af63309"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.071471 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config" (OuterVolumeSpecName: "config") pod "4938969e-b368-4aa2-ab42-5ff95af63309" (UID: "4938969e-b368-4aa2-ab42-5ff95af63309"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072036 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072104 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072420 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072449 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072463 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072480 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4938969e-b368-4aa2-ab42-5ff95af63309-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072495 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4938969e-b368-4aa2-ab42-5ff95af63309-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.072803 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp" (OuterVolumeSpecName: "tmp") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.081807 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config" (OuterVolumeSpecName: "config") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.094200 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.098122 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4938969e-b368-4aa2-ab42-5ff95af63309" (UID: "4938969e-b368-4aa2-ab42-5ff95af63309"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.098510 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg" (OuterVolumeSpecName: "kube-api-access-mcstg") pod "385ae430-6acc-4039-bc95-e19b4f69f5aa" (UID: "385ae430-6acc-4039-bc95-e19b4f69f5aa"). InnerVolumeSpecName "kube-api-access-mcstg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.110665 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd" (OuterVolumeSpecName: "kube-api-access-zfkfd") pod "4938969e-b368-4aa2-ab42-5ff95af63309" (UID: "4938969e-b368-4aa2-ab42-5ff95af63309"). InnerVolumeSpecName "kube-api-access-zfkfd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.118597 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.119790 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.125161 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerName="controller-manager" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.133955 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.134159 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.173919 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.173968 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.173988 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174018 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174042 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fr5t\" (UniqueName: 
\"kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174062 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174123 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174165 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174193 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174220 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174237 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl9v\" (UniqueName: \"kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174284 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ae430-6acc-4039-bc95-e19b4f69f5aa-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174421 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4938969e-b368-4aa2-ab42-5ff95af63309-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174492 5004 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-zfkfd\" (UniqueName: \"kubernetes.io/projected/4938969e-b368-4aa2-ab42-5ff95af63309-kube-api-access-zfkfd\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174511 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ae430-6acc-4039-bc95-e19b4f69f5aa-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174530 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/385ae430-6acc-4039-bc95-e19b4f69f5aa-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.174542 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcstg\" (UniqueName: \"kubernetes.io/projected/385ae430-6acc-4039-bc95-e19b4f69f5aa-kube-api-access-mcstg\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.220950 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84b8ff8d65-wpv7g_385ae430-6acc-4039-bc95-e19b4f69f5aa/controller-manager/1.log" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.221313 5004 generic.go:358] "Generic (PLEG): container finished" podID="385ae430-6acc-4039-bc95-e19b4f69f5aa" containerID="cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760" exitCode=0 Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.221555 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.221573 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerDied","Data":"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760"} Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.221806 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g" event={"ID":"385ae430-6acc-4039-bc95-e19b4f69f5aa","Type":"ContainerDied","Data":"c624b31a036064b022e14e890f0db60c193648dca05ac01c1ba6857e793174bd"} Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.221865 5004 scope.go:117] "RemoveContainer" containerID="cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.223784 5004 generic.go:358] "Generic (PLEG): container finished" podID="4938969e-b368-4aa2-ab42-5ff95af63309" containerID="8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb" exitCode=0 Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.223930 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" event={"ID":"4938969e-b368-4aa2-ab42-5ff95af63309","Type":"ContainerDied","Data":"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb"} Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.223953 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" event={"ID":"4938969e-b368-4aa2-ab42-5ff95af63309","Type":"ContainerDied","Data":"2ae84feb2b514f53a6640de6e65759b0215168c0c12bd32028e32c92f7d0ea41"} Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.224114 5004 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.239206 5004 scope.go:117] "RemoveContainer" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.268459 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275524 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7fr5t\" (UniqueName: \"kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275560 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275590 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275610 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275636 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275654 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275672 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl9v\" (UniqueName: \"kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275714 5004 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275730 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275751 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.275779 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.276844 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.277671 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.277713 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.278367 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.279145 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-952d9\" (UID: 
\"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.279216 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84b8ff8d65-wpv7g"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.280991 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.281347 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.282786 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.286447 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.286775 5004 scope.go:117] "RemoveContainer" containerID="cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760" Dec 08 18:55:37 crc kubenswrapper[5004]: E1208 18:55:37.288289 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760\": container with ID starting with cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760 not found: ID does not exist" containerID="cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.288861 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760"} err="failed to get container status \"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760\": rpc error: code = NotFound desc = could not find container \"cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760\": container with ID starting with cfc8c33927c2aaf13cb579204d726df983b5d030aeec98759d400e73cda06760 not found: ID does not exist" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.288979 5004 scope.go:117] "RemoveContainer" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:37 crc kubenswrapper[5004]: E1208 18:55:37.289651 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41\": container with ID starting with b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41 not found: ID does not exist" containerID="b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.290356 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41"} err="failed to get container status \"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41\": rpc error: code = NotFound desc = could not find container \"b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41\": container with ID starting with b232e6e98ab71a72f463f4710f7205092bebe2d561ddd20675b814772af59f41 not found: ID does not exist" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.290840 5004 scope.go:117] "RemoveContainer" containerID="8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.296031 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl9v\" (UniqueName: \"kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v\") pod \"controller-manager-8f4776fc7-952d9\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.296128 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.303422 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fr5t\" (UniqueName: \"kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t\") pod \"route-controller-manager-7fd99595df-fvzqx\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.310152 5004 scope.go:117] "RemoveContainer" containerID="8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.310953 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcf4ff857-fv2b4"] Dec 08 18:55:37 crc kubenswrapper[5004]: E1208 18:55:37.310952 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb\": container with ID starting with 8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb not found: ID does not exist" containerID="8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.311043 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb"} err="failed to get container status \"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb\": rpc error: code = NotFound desc = could not find container \"8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb\": container with ID starting with 
8e8ad8d2504b36b4ed197b9bf851fe22d044420ff9cfe7d4634e9f6c31b1a4fb not found: ID does not exist" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.420445 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.464809 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.818383 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:37 crc kubenswrapper[5004]: I1208 18:55:37.911597 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:37 crc kubenswrapper[5004]: W1208 18:55:37.931462 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb2fd1b_4381_4505_b84f_fd98014b84cb.slice/crio-801ed871c9a3b6066940f4f3035e46ed83e83ec2bcec1c4bed103cae00b532f3 WatchSource:0}: Error finding container 801ed871c9a3b6066940f4f3035e46ed83e83ec2bcec1c4bed103cae00b532f3: Status 404 returned error can't find the container with id 801ed871c9a3b6066940f4f3035e46ed83e83ec2bcec1c4bed103cae00b532f3 Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.231653 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" event={"ID":"9eb2fd1b-4381-4505-b84f-fd98014b84cb","Type":"ContainerStarted","Data":"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5"} Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.231703 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" event={"ID":"9eb2fd1b-4381-4505-b84f-fd98014b84cb","Type":"ContainerStarted","Data":"801ed871c9a3b6066940f4f3035e46ed83e83ec2bcec1c4bed103cae00b532f3"} Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.232049 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.234339 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" event={"ID":"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16","Type":"ContainerStarted","Data":"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2"} Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.234383 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" event={"ID":"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16","Type":"ContainerStarted","Data":"7c69c6a934e2037bccc662183973a1b1759fb8a89c85977ec0418ff54d6e7c59"} Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.234564 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.278414 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" podStartSLOduration=2.278392954 
podStartE2EDuration="2.278392954s" podCreationTimestamp="2025-12-08 18:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:38.259418874 +0000 UTC m=+271.908327182" watchObservedRunningTime="2025-12-08 18:55:38.278392954 +0000 UTC m=+271.927301282" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.279987 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" podStartSLOduration=2.279976467 podStartE2EDuration="2.279976467s" podCreationTimestamp="2025-12-08 18:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:38.274535706 +0000 UTC m=+271.923444044" watchObservedRunningTime="2025-12-08 18:55:38.279976467 +0000 UTC m=+271.928884785" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.559465 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.718695 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385ae430-6acc-4039-bc95-e19b4f69f5aa" path="/var/lib/kubelet/pods/385ae430-6acc-4039-bc95-e19b4f69f5aa/volumes" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.719563 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4938969e-b368-4aa2-ab42-5ff95af63309" path="/var/lib/kubelet/pods/4938969e-b368-4aa2-ab42-5ff95af63309/volumes" Dec 08 18:55:38 crc kubenswrapper[5004]: I1208 18:55:38.720253 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:52 crc kubenswrapper[5004]: I1208 18:55:52.294390 5004 ???:1] "http: TLS handshake error from 192.168.126.11:40422: no serving certificate available for the kubelet" Dec 08 18:55:56 crc kubenswrapper[5004]: I1208 18:55:56.589756 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:56 crc kubenswrapper[5004]: I1208 18:55:56.590167 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" podUID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" containerName="controller-manager" containerID="cri-o://de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2" gracePeriod=30 Dec 08 18:55:56 crc kubenswrapper[5004]: I1208 18:55:56.624498 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:56 crc kubenswrapper[5004]: I1208 18:55:56.625627 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" podUID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" containerName="route-controller-manager" containerID="cri-o://4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5" gracePeriod=30 Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.062519 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.068862 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.106903 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108569 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" containerName="controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108599 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" containerName="controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108637 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" containerName="route-controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108645 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" containerName="route-controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108751 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" containerName="route-controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.108767 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" containerName="controller-manager" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.118159 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.118309 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.131137 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.137029 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.142579 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.190806 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.190851 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.190876 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca\") pod \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.190919 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.190959 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp\") pod \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191025 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191054 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fr5t\" (UniqueName: \"kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t\") pod \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191092 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert\") pod \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191124 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config\") pod \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\" (UID: \"9eb2fd1b-4381-4505-b84f-fd98014b84cb\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191169 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2zl9v\" (UniqueName: \"kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191190 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config\") pod \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\" (UID: \"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16\") " Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191343 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191376 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191410 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191438 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191469 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191494 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191528 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") 
" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191562 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx29n\" (UniqueName: \"kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191579 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191598 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.191633 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njq6x\" (UniqueName: \"kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.192504 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp" (OuterVolumeSpecName: "tmp") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.192741 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.193098 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.193468 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp" (OuterVolumeSpecName: "tmp") pod "9eb2fd1b-4381-4505-b84f-fd98014b84cb" (UID: "9eb2fd1b-4381-4505-b84f-fd98014b84cb"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.193551 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config" (OuterVolumeSpecName: "config") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.194135 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "9eb2fd1b-4381-4505-b84f-fd98014b84cb" (UID: "9eb2fd1b-4381-4505-b84f-fd98014b84cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.194804 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config" (OuterVolumeSpecName: "config") pod "9eb2fd1b-4381-4505-b84f-fd98014b84cb" (UID: "9eb2fd1b-4381-4505-b84f-fd98014b84cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.203880 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t" (OuterVolumeSpecName: "kube-api-access-7fr5t") pod "9eb2fd1b-4381-4505-b84f-fd98014b84cb" (UID: "9eb2fd1b-4381-4505-b84f-fd98014b84cb"). InnerVolumeSpecName "kube-api-access-7fr5t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.205547 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v" (OuterVolumeSpecName: "kube-api-access-2zl9v") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "kube-api-access-2zl9v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.205667 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9eb2fd1b-4381-4505-b84f-fd98014b84cb" (UID: "9eb2fd1b-4381-4505-b84f-fd98014b84cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.206110 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" (UID: "dd811bc5-ad8c-47a9-b2b8-b36be4f93a16"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.292680 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293021 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293151 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293258 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jx29n\" (UniqueName: \"kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293360 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293437 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293530 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njq6x\" (UniqueName: \"kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293872 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293978 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.293727 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294175 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294263 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294409 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294478 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7fr5t\" (UniqueName: \"kubernetes.io/projected/9eb2fd1b-4381-4505-b84f-fd98014b84cb-kube-api-access-7fr5t\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294571 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eb2fd1b-4381-4505-b84f-fd98014b84cb-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294641 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294696 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zl9v\" (UniqueName: \"kubernetes.io/projected/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-kube-api-access-2zl9v\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294937 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295157 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295484 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295580 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9eb2fd1b-4381-4505-b84f-fd98014b84cb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295657 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295965 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9eb2fd1b-4381-4505-b84f-fd98014b84cb-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.295110 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294439 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.296027 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.294861 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.296577 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.298061 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.302049 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.303347 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.310307 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njq6x\" (UniqueName: \"kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x\") pod \"route-controller-manager-5647f8bb7d-4htbl\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.312124 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx29n\" (UniqueName: \"kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n\") pod \"controller-manager-6589bb7b8d-l9cj5\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.357509 5004 generic.go:358] "Generic (PLEG): container finished" podID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" containerID="4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5" exitCode=0 Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.357633 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" event={"ID":"9eb2fd1b-4381-4505-b84f-fd98014b84cb","Type":"ContainerDied","Data":"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5"} Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.357671 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" event={"ID":"9eb2fd1b-4381-4505-b84f-fd98014b84cb","Type":"ContainerDied","Data":"801ed871c9a3b6066940f4f3035e46ed83e83ec2bcec1c4bed103cae00b532f3"} Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.357694 5004 scope.go:117] "RemoveContainer" containerID="4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.357812 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.362381 5004 generic.go:358] "Generic (PLEG): container finished" podID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" containerID="de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2" exitCode=0 Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.362506 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" event={"ID":"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16","Type":"ContainerDied","Data":"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2"} Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.362527 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" event={"ID":"dd811bc5-ad8c-47a9-b2b8-b36be4f93a16","Type":"ContainerDied","Data":"7c69c6a934e2037bccc662183973a1b1759fb8a89c85977ec0418ff54d6e7c59"} Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.362604 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-952d9" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.376631 5004 scope.go:117] "RemoveContainer" containerID="4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5" Dec 08 18:55:57 crc kubenswrapper[5004]: E1208 18:55:57.377305 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5\": container with ID starting with 4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5 not found: ID does not exist" containerID="4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.377346 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5"} err="failed to get container status \"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5\": rpc error: code = NotFound desc = could not find container \"4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5\": container with ID starting with 4e2be1690ead3de244facdd03bf0c53a3e0f00b37929700bfab4bec0043403f5 not found: ID does not exist" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.377371 5004 scope.go:117] "RemoveContainer" containerID="de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.393750 5004 scope.go:117] "RemoveContainer" containerID="de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2" Dec 08 18:55:57 crc kubenswrapper[5004]: E1208 18:55:57.394693 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2\": container with ID starting with de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2 not found: ID does not exist" containerID="de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.394731 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2"} 
err="failed to get container status \"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2\": rpc error: code = NotFound desc = could not find container \"de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2\": container with ID starting with de6aec397ba9319c920105285dd5ede7bbc0c25ac48a536471a449a4fd2666b2 not found: ID does not exist" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.400349 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.408257 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-fvzqx"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.412554 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.415746 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-952d9"] Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.449262 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.457746 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.767055 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:55:57 crc kubenswrapper[5004]: W1208 18:55:57.777423 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63027a08_c4d2_4ea1_af7b_9ebc2acb6791.slice/crio-2435f40d923c5caada8f25c4f271b68b772830490abfcf4cc1a069567dd0e34d WatchSource:0}: Error finding container 2435f40d923c5caada8f25c4f271b68b772830490abfcf4cc1a069567dd0e34d: Status 404 returned error can't find the container with id 2435f40d923c5caada8f25c4f271b68b772830490abfcf4cc1a069567dd0e34d Dec 08 18:55:57 crc kubenswrapper[5004]: I1208 18:55:57.798588 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:55:57 crc kubenswrapper[5004]: W1208 18:55:57.802998 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf40e3f_f30f_4522_805c_ee6c7c9721b3.slice/crio-037ef6d8ce7e73400cba054931bd2f6d6cfa2d6c938d64a8d4e0eda82ab4f088 WatchSource:0}: Error finding container 037ef6d8ce7e73400cba054931bd2f6d6cfa2d6c938d64a8d4e0eda82ab4f088: Status 404 returned error can't find the container with id 037ef6d8ce7e73400cba054931bd2f6d6cfa2d6c938d64a8d4e0eda82ab4f088 Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.369199 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" event={"ID":"3bf40e3f-f30f-4522-805c-ee6c7c9721b3","Type":"ContainerStarted","Data":"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537"} Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.369544 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" event={"ID":"3bf40e3f-f30f-4522-805c-ee6c7c9721b3","Type":"ContainerStarted","Data":"037ef6d8ce7e73400cba054931bd2f6d6cfa2d6c938d64a8d4e0eda82ab4f088"} Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.369571 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.374347 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" event={"ID":"63027a08-c4d2-4ea1-af7b-9ebc2acb6791","Type":"ContainerStarted","Data":"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55"} Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.374385 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" event={"ID":"63027a08-c4d2-4ea1-af7b-9ebc2acb6791","Type":"ContainerStarted","Data":"2435f40d923c5caada8f25c4f271b68b772830490abfcf4cc1a069567dd0e34d"} Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.374678 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.415487 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" podStartSLOduration=2.4154703299999998 podStartE2EDuration="2.41547033s" podCreationTimestamp="2025-12-08 18:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:58.39588495 +0000 UTC m=+292.044793258" watchObservedRunningTime="2025-12-08 18:55:58.41547033 +0000 UTC m=+292.064378658" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.416372 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" podStartSLOduration=2.416364009 podStartE2EDuration="2.416364009s" podCreationTimestamp="2025-12-08 18:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:58.41457093 +0000 UTC m=+292.063479238" watchObservedRunningTime="2025-12-08 18:55:58.416364009 +0000 UTC m=+292.065272317" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.515734 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.683203 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.747888 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb2fd1b-4381-4505-b84f-fd98014b84cb" path="/var/lib/kubelet/pods/9eb2fd1b-4381-4505-b84f-fd98014b84cb/volumes" Dec 08 18:55:58 crc kubenswrapper[5004]: I1208 18:55:58.748625 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd811bc5-ad8c-47a9-b2b8-b36be4f93a16" path="/var/lib/kubelet/pods/dd811bc5-ad8c-47a9-b2b8-b36be4f93a16/volumes" Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 
18:56:01.000554 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.000933 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.000993 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.001640 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65"} pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.001704 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" containerID="cri-o://aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65" gracePeriod=600 Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.396302 5004 generic.go:358] "Generic (PLEG): container finished" podID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerID="aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65" exitCode=0 Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.396470 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerDied","Data":"aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65"} Dec 08 18:56:01 crc kubenswrapper[5004]: I1208 18:56:01.396947 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1"} Dec 08 18:56:06 crc kubenswrapper[5004]: I1208 18:56:06.968937 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 18:56:06 crc kubenswrapper[5004]: I1208 18:56:06.969099 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 18:56:06 crc kubenswrapper[5004]: I1208 18:56:06.992630 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:56:06 crc kubenswrapper[5004]: I1208 18:56:06.992727 5004 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:56:16 crc kubenswrapper[5004]: I1208 18:56:16.597684 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:56:16 crc kubenswrapper[5004]: I1208 18:56:16.598541 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" podUID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" containerName="controller-manager" containerID="cri-o://20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537" gracePeriod=30 Dec 08 18:56:16 crc kubenswrapper[5004]: I1208 18:56:16.616611 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:56:16 crc kubenswrapper[5004]: I1208 18:56:16.617009 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" podUID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" containerName="route-controller-manager" containerID="cri-o://858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55" gracePeriod=30 Dec 08 18:56:16 crc kubenswrapper[5004]: I1208 18:56:16.997325 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.035742 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.071428 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-khpft"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072092 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" containerName="route-controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072116 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" containerName="route-controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072141 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" containerName="controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072166 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" containerName="controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072296 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" containerName="controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.072321 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" containerName="route-controller-manager" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079282 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: 
\"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079421 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079451 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079482 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079539 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.079618 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx29n\" (UniqueName: \"kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n\") pod \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\" (UID: \"3bf40e3f-f30f-4522-805c-ee6c7c9721b3\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.080338 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.081395 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp" (OuterVolumeSpecName: "tmp") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.081907 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.081990 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config" (OuterVolumeSpecName: "config") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.082336 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.093045 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n" (OuterVolumeSpecName: "kube-api-access-jx29n") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "kube-api-access-jx29n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.094143 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-khpft"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.094342 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3bf40e3f-f30f-4522-805c-ee6c7c9721b3" (UID: "3bf40e3f-f30f-4522-805c-ee6c7c9721b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.145119 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.179308 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.179499 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180293 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca\") pod \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180391 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njq6x\" (UniqueName: \"kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x\") pod \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180440 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config\") pod \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180487 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp\") pod \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180526 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert\") pod \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\" (UID: \"63027a08-c4d2-4ea1-af7b-9ebc2acb6791\") " Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180668 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-config\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180710 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180737 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c084f2ce-de36-4ab6-8c40-e281eae75367-tmp\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180802 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c084f2ce-de36-4ab6-8c40-e281eae75367-serving-cert\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" 
Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180834 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ghg\" (UniqueName: \"kubernetes.io/projected/c084f2ce-de36-4ab6-8c40-e281eae75367-kube-api-access-b9ghg\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180879 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-client-ca\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180927 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180942 5004 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180957 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180984 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.180997 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jx29n\" (UniqueName: \"kubernetes.io/projected/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-kube-api-access-jx29n\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.181008 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf40e3f-f30f-4522-805c-ee6c7c9721b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.182212 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca" (OuterVolumeSpecName: "client-ca") pod "63027a08-c4d2-4ea1-af7b-9ebc2acb6791" (UID: "63027a08-c4d2-4ea1-af7b-9ebc2acb6791"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.183559 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp" (OuterVolumeSpecName: "tmp") pod "63027a08-c4d2-4ea1-af7b-9ebc2acb6791" (UID: "63027a08-c4d2-4ea1-af7b-9ebc2acb6791"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.183755 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config" (OuterVolumeSpecName: "config") pod "63027a08-c4d2-4ea1-af7b-9ebc2acb6791" (UID: "63027a08-c4d2-4ea1-af7b-9ebc2acb6791"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.194457 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x" (OuterVolumeSpecName: "kube-api-access-njq6x") pod "63027a08-c4d2-4ea1-af7b-9ebc2acb6791" (UID: "63027a08-c4d2-4ea1-af7b-9ebc2acb6791"). InnerVolumeSpecName "kube-api-access-njq6x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.197054 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63027a08-c4d2-4ea1-af7b-9ebc2acb6791" (UID: "63027a08-c4d2-4ea1-af7b-9ebc2acb6791"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.282920 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283160 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c084f2ce-de36-4ab6-8c40-e281eae75367-tmp\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283233 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7961d-2b67-422f-8266-8e339618cf1e-serving-cert\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283282 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c084f2ce-de36-4ab6-8c40-e281eae75367-serving-cert\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283332 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ghg\" (UniqueName: \"kubernetes.io/projected/c084f2ce-de36-4ab6-8c40-e281eae75367-kube-api-access-b9ghg\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283419 
5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03f7961d-2b67-422f-8266-8e339618cf1e-tmp\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283445 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm94\" (UniqueName: \"kubernetes.io/projected/03f7961d-2b67-422f-8266-8e339618cf1e-kube-api-access-thm94\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283474 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-client-ca\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283552 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-config\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283608 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-config\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283629 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-client-ca\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283671 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njq6x\" (UniqueName: \"kubernetes.io/projected/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-kube-api-access-njq6x\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283683 5004 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283692 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283700 5004 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.283709 5004 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63027a08-c4d2-4ea1-af7b-9ebc2acb6791-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.284383 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-proxy-ca-bundles\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.286495 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-client-ca\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.286792 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c084f2ce-de36-4ab6-8c40-e281eae75367-tmp\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.287883 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c084f2ce-de36-4ab6-8c40-e281eae75367-config\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.292720 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c084f2ce-de36-4ab6-8c40-e281eae75367-serving-cert\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.303713 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ghg\" (UniqueName: \"kubernetes.io/projected/c084f2ce-de36-4ab6-8c40-e281eae75367-kube-api-access-b9ghg\") pod \"controller-manager-8f4776fc7-khpft\" (UID: \"c084f2ce-de36-4ab6-8c40-e281eae75367\") " pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.384653 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7961d-2b67-422f-8266-8e339618cf1e-serving-cert\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.384743 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03f7961d-2b67-422f-8266-8e339618cf1e-tmp\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: 
\"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.384769 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thm94\" (UniqueName: \"kubernetes.io/projected/03f7961d-2b67-422f-8266-8e339618cf1e-kube-api-access-thm94\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.384820 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-config\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.384863 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-client-ca\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.385947 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03f7961d-2b67-422f-8266-8e339618cf1e-tmp\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.386818 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-config\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.387532 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f7961d-2b67-422f-8266-8e339618cf1e-client-ca\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.390508 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f7961d-2b67-422f-8266-8e339618cf1e-serving-cert\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.405462 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thm94\" (UniqueName: \"kubernetes.io/projected/03f7961d-2b67-422f-8266-8e339618cf1e-kube-api-access-thm94\") pod \"route-controller-manager-7fd99595df-8n9qc\" (UID: \"03f7961d-2b67-422f-8266-8e339618cf1e\") " pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" 
Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.428778 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.497403 5004 generic.go:358] "Generic (PLEG): container finished" podID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" containerID="20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537" exitCode=0 Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.497451 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" event={"ID":"3bf40e3f-f30f-4522-805c-ee6c7c9721b3","Type":"ContainerDied","Data":"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537"} Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.497513 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" event={"ID":"3bf40e3f-f30f-4522-805c-ee6c7c9721b3","Type":"ContainerDied","Data":"037ef6d8ce7e73400cba054931bd2f6d6cfa2d6c938d64a8d4e0eda82ab4f088"} Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.497536 5004 scope.go:117] "RemoveContainer" containerID="20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.497538 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.503251 5004 generic.go:358] "Generic (PLEG): container finished" podID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" containerID="858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55" exitCode=0 Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.503404 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.503449 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" event={"ID":"63027a08-c4d2-4ea1-af7b-9ebc2acb6791","Type":"ContainerDied","Data":"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55"} Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.503502 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl" event={"ID":"63027a08-c4d2-4ea1-af7b-9ebc2acb6791","Type":"ContainerDied","Data":"2435f40d923c5caada8f25c4f271b68b772830490abfcf4cc1a069567dd0e34d"} Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.516204 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.547183 5004 scope.go:117] "RemoveContainer" containerID="20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537" Dec 08 18:56:17 crc kubenswrapper[5004]: E1208 18:56:17.549483 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537\": container with ID starting with 20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537 not found: ID does not exist" containerID="20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.549560 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537"} err="failed to get container status \"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537\": rpc error: code = NotFound desc = could not find container \"20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537\": container with ID starting with 20267613c0e11e5b6ff828d17fe1b9834f2e6727d48798920470226fc915a537 not found: ID does not exist" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.549611 5004 scope.go:117] "RemoveContainer" containerID="858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.567819 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.575527 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5647f8bb7d-4htbl"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.603653 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.613098 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6589bb7b8d-l9cj5"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.615108 5004 scope.go:117] "RemoveContainer" containerID="858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55" Dec 08 18:56:17 crc kubenswrapper[5004]: E1208 18:56:17.615977 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55\": container with ID starting with 858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55 not found: ID does not exist" containerID="858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55" Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.616017 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55"} err="failed to get container status \"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55\": rpc error: code = NotFound desc = could not find container \"858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55\": container with ID starting with 858629fea99ed9456a9a735938db0e826548dbe6fe663b3e592fa5dab208bc55 not found: ID does not exist" Dec 08 
18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.692487 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f4776fc7-khpft"] Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.710507 5004 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:56:17 crc kubenswrapper[5004]: I1208 18:56:17.808387 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc"] Dec 08 18:56:17 crc kubenswrapper[5004]: W1208 18:56:17.818317 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f7961d_2b67_422f_8266_8e339618cf1e.slice/crio-8d489681f051cfe11381c82917d5f93fb91e8d0882748611b1998b7d8d773e85 WatchSource:0}: Error finding container 8d489681f051cfe11381c82917d5f93fb91e8d0882748611b1998b7d8d773e85: Status 404 returned error can't find the container with id 8d489681f051cfe11381c82917d5f93fb91e8d0882748611b1998b7d8d773e85 Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.511534 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" event={"ID":"c084f2ce-de36-4ab6-8c40-e281eae75367","Type":"ContainerStarted","Data":"e9c80d22786d77867444cc432f4a39dcb7f0e1eb6d6527fb997f5a1cfd6015bb"} Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.511920 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" event={"ID":"c084f2ce-de36-4ab6-8c40-e281eae75367","Type":"ContainerStarted","Data":"a9b00a1f40fb88ce9260430e59aecfa1d726b52c65a4cb6814942aaaf11a9505"} Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.511944 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.514336 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" event={"ID":"03f7961d-2b67-422f-8266-8e339618cf1e","Type":"ContainerStarted","Data":"e691ffe80fbcc0b83b5f1a16875b1af2aa81e43df85174d9859ba9875917b6dd"} Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.514360 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" event={"ID":"03f7961d-2b67-422f-8266-8e339618cf1e","Type":"ContainerStarted","Data":"8d489681f051cfe11381c82917d5f93fb91e8d0882748611b1998b7d8d773e85"} Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.514567 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.517550 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.522450 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.534972 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f4776fc7-khpft" 
podStartSLOduration=2.534952049 podStartE2EDuration="2.534952049s" podCreationTimestamp="2025-12-08 18:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:56:18.533298075 +0000 UTC m=+312.182206393" watchObservedRunningTime="2025-12-08 18:56:18.534952049 +0000 UTC m=+312.183860367" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.577909 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fd99595df-8n9qc" podStartSLOduration=2.577892576 podStartE2EDuration="2.577892576s" podCreationTimestamp="2025-12-08 18:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:56:18.576471048 +0000 UTC m=+312.225379356" watchObservedRunningTime="2025-12-08 18:56:18.577892576 +0000 UTC m=+312.226800884" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.718253 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf40e3f-f30f-4522-805c-ee6c7c9721b3" path="/var/lib/kubelet/pods/3bf40e3f-f30f-4522-805c-ee6c7c9721b3/volumes" Dec 08 18:56:18 crc kubenswrapper[5004]: I1208 18:56:18.718962 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63027a08-c4d2-4ea1-af7b-9ebc2acb6791" path="/var/lib/kubelet/pods/63027a08-c4d2-4ea1-af7b-9ebc2acb6791/volumes" Dec 08 18:56:43 crc kubenswrapper[5004]: I1208 18:56:43.749598 5004 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.548548 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.549476 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rg666" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="registry-server" containerID="cri-o://b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd" gracePeriod=30 Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.569247 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.570155 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v879b" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="registry-server" containerID="cri-o://b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14" gracePeriod=30 Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.587158 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.587430 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" containerID="cri-o://0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730" gracePeriod=30 Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.593256 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:57:10 crc 
kubenswrapper[5004]: I1208 18:57:10.593661 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fkpfb" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="registry-server" containerID="cri-o://fbc132943e0984809bc2f3c6458619d566b5e121303a51a9a146ca1b61158b66" gracePeriod=30 Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.604421 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.604798 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h9jcq" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="registry-server" containerID="cri-o://ef962ce23d0dae5c5a0257d08c61c4fc1554390fdace6b14f325f7c6b7910851" gracePeriod=30 Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.641859 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qjpbl"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.649281 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.674792 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qjpbl"] Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.752060 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7qx\" (UniqueName: \"kubernetes.io/projected/52421bbb-c152-439c-98a9-eea063951c00-kube-api-access-hx7qx\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.752518 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/52421bbb-c152-439c-98a9-eea063951c00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.752705 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52421bbb-c152-439c-98a9-eea063951c00-tmp\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.752761 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52421bbb-c152-439c-98a9-eea063951c00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.853880 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52421bbb-c152-439c-98a9-eea063951c00-tmp\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: 
\"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.853986 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52421bbb-c152-439c-98a9-eea063951c00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.854031 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hx7qx\" (UniqueName: \"kubernetes.io/projected/52421bbb-c152-439c-98a9-eea063951c00-kube-api-access-hx7qx\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.854064 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/52421bbb-c152-439c-98a9-eea063951c00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.859547 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52421bbb-c152-439c-98a9-eea063951c00-tmp\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.860221 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52421bbb-c152-439c-98a9-eea063951c00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.878695 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/52421bbb-c152-439c-98a9-eea063951c00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:10 crc kubenswrapper[5004]: I1208 18:57:10.893488 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx7qx\" (UniqueName: \"kubernetes.io/projected/52421bbb-c152-439c-98a9-eea063951c00-kube-api-access-hx7qx\") pod \"marketplace-operator-547dbd544d-qjpbl\" (UID: \"52421bbb-c152-439c-98a9-eea063951c00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.035184 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.044060 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.046624 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.061442 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content\") pod \"a334e99e-c733-444f-909c-978afa75eea2\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.061582 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp\") pod \"58b8eee8-00f8-4078-a0d1-3805d336771f\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.061607 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txws6\" (UniqueName: \"kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6\") pod \"a334e99e-c733-444f-909c-978afa75eea2\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.080868 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca\") pod \"58b8eee8-00f8-4078-a0d1-3805d336771f\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.108519 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp" (OuterVolumeSpecName: "tmp") pod "58b8eee8-00f8-4078-a0d1-3805d336771f" (UID: "58b8eee8-00f8-4078-a0d1-3805d336771f"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.080917 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx288\" (UniqueName: \"kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288\") pod \"58b8eee8-00f8-4078-a0d1-3805d336771f\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.108644 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities\") pod \"a334e99e-c733-444f-909c-978afa75eea2\" (UID: \"a334e99e-c733-444f-909c-978afa75eea2\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.108669 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics\") pod \"58b8eee8-00f8-4078-a0d1-3805d336771f\" (UID: \"58b8eee8-00f8-4078-a0d1-3805d336771f\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.109185 5004 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58b8eee8-00f8-4078-a0d1-3805d336771f-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.117299 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "58b8eee8-00f8-4078-a0d1-3805d336771f" (UID: "58b8eee8-00f8-4078-a0d1-3805d336771f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.126271 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities" (OuterVolumeSpecName: "utilities") pod "a334e99e-c733-444f-909c-978afa75eea2" (UID: "a334e99e-c733-444f-909c-978afa75eea2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.135233 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6" (OuterVolumeSpecName: "kube-api-access-txws6") pod "a334e99e-c733-444f-909c-978afa75eea2" (UID: "a334e99e-c733-444f-909c-978afa75eea2"). InnerVolumeSpecName "kube-api-access-txws6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.139120 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288" (OuterVolumeSpecName: "kube-api-access-kx288") pod "58b8eee8-00f8-4078-a0d1-3805d336771f" (UID: "58b8eee8-00f8-4078-a0d1-3805d336771f"). InnerVolumeSpecName "kube-api-access-kx288". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.144462 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a334e99e-c733-444f-909c-978afa75eea2" (UID: "a334e99e-c733-444f-909c-978afa75eea2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.146536 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "58b8eee8-00f8-4078-a0d1-3805d336771f" (UID: "58b8eee8-00f8-4078-a0d1-3805d336771f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.177518 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.209925 5004 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.209956 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kx288\" (UniqueName: \"kubernetes.io/projected/58b8eee8-00f8-4078-a0d1-3805d336771f-kube-api-access-kx288\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.209967 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.210035 5004 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58b8eee8-00f8-4078-a0d1-3805d336771f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.210048 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a334e99e-c733-444f-909c-978afa75eea2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.210058 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-txws6\" (UniqueName: \"kubernetes.io/projected/a334e99e-c733-444f-909c-978afa75eea2-kube-api-access-txws6\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.218963 5004 generic.go:358] "Generic (PLEG): container finished" podID="a334e99e-c733-444f-909c-978afa75eea2" containerID="b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd" exitCode=0 Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.219200 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerDied","Data":"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.219248 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rg666" event={"ID":"a334e99e-c733-444f-909c-978afa75eea2","Type":"ContainerDied","Data":"61516a3fc0ea5c9b0195a2194672d6ec8a8bf59f9441548cbd5ed7396f5a6381"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.219275 5004 scope.go:117] "RemoveContainer" containerID="b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.219374 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg666" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.235173 5004 generic.go:358] "Generic (PLEG): container finished" podID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerID="0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730" exitCode=0 Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.236459 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.236913 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" event={"ID":"58b8eee8-00f8-4078-a0d1-3805d336771f","Type":"ContainerDied","Data":"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.237267 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z7q5s" event={"ID":"58b8eee8-00f8-4078-a0d1-3805d336771f","Type":"ContainerDied","Data":"6dcb41bff652e1428dc21b5dd6d4372d275efeb7ae815d1ee5b98184a3d2f80a"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.289575 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.292643 5004 generic.go:358] "Generic (PLEG): container finished" podID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerID="b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14" exitCode=0 Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.292717 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerDied","Data":"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.292745 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v879b" event={"ID":"aab8b6c5-e160-4589-b8d8-34647c504c26","Type":"ContainerDied","Data":"5ada913b41cec63c2cc080586519f21e385c4f2f123fa4c1c96fdc680db2fd76"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.292864 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v879b" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.306581 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.311670 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l28mq\" (UniqueName: \"kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq\") pod \"aab8b6c5-e160-4589-b8d8-34647c504c26\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.315606 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rg666"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.323127 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content\") pod \"aab8b6c5-e160-4589-b8d8-34647c504c26\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.324772 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities\") pod \"aab8b6c5-e160-4589-b8d8-34647c504c26\" (UID: \"aab8b6c5-e160-4589-b8d8-34647c504c26\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.339311 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities" (OuterVolumeSpecName: "utilities") pod "aab8b6c5-e160-4589-b8d8-34647c504c26" (UID: "aab8b6c5-e160-4589-b8d8-34647c504c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.339473 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq" (OuterVolumeSpecName: "kube-api-access-l28mq") pod "aab8b6c5-e160-4589-b8d8-34647c504c26" (UID: "aab8b6c5-e160-4589-b8d8-34647c504c26"). InnerVolumeSpecName "kube-api-access-l28mq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.347347 5004 generic.go:358] "Generic (PLEG): container finished" podID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerID="ef962ce23d0dae5c5a0257d08c61c4fc1554390fdace6b14f325f7c6b7910851" exitCode=0 Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.347533 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerDied","Data":"ef962ce23d0dae5c5a0257d08c61c4fc1554390fdace6b14f325f7c6b7910851"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.352140 5004 generic.go:358] "Generic (PLEG): container finished" podID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerID="fbc132943e0984809bc2f3c6458619d566b5e121303a51a9a146ca1b61158b66" exitCode=0 Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.352244 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerDied","Data":"fbc132943e0984809bc2f3c6458619d566b5e121303a51a9a146ca1b61158b66"} Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.375438 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.376159 5004 scope.go:117] "RemoveContainer" containerID="b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.386148 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z7q5s"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.398614 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aab8b6c5-e160-4589-b8d8-34647c504c26" (UID: "aab8b6c5-e160-4589-b8d8-34647c504c26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.413820 5004 scope.go:117] "RemoveContainer" containerID="31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.442162 5004 scope.go:117] "RemoveContainer" containerID="b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.442751 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd\": container with ID starting with b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd not found: ID does not exist" containerID="b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.442797 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd"} err="failed to get container status \"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd\": rpc error: code = NotFound desc = could not find container \"b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd\": container with ID starting with b062fe427bfd699137707ba86e06e3a79a8f7d22f2389fa97c5e7a22cb2582dd not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.442824 5004 scope.go:117] "RemoveContainer" containerID="b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.443196 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff\": container with ID starting with b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff not found: ID does not exist" containerID="b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.443225 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff"} err="failed to get container status \"b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff\": rpc error: code = NotFound desc = could not find container \"b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff\": container with ID starting with b5b21c493ad31e318c453b6f4889bf3b03cd7bd0cfe342673bd4891c86d67eff not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.443242 5004 scope.go:117] "RemoveContainer" containerID="31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.443509 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571\": container with ID starting with 31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571 not found: ID does not exist" containerID="31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.443530 5004 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571"} err="failed to get container status \"31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571\": rpc error: code = NotFound desc = could not find container \"31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571\": container with ID starting with 31e401ed147cbfbbd56cab9d0be9a40271f957db032836895763841218cfb571 not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.443546 5004 scope.go:117] "RemoveContainer" containerID="0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457067 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content\") pod \"0196edda-a1e0-4e11-b84d-15988bdf3507\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457181 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities\") pod \"0196edda-a1e0-4e11-b84d-15988bdf3507\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457230 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs2x\" (UniqueName: \"kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x\") pod \"0196edda-a1e0-4e11-b84d-15988bdf3507\" (UID: \"0196edda-a1e0-4e11-b84d-15988bdf3507\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457561 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457581 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l28mq\" (UniqueName: \"kubernetes.io/projected/aab8b6c5-e160-4589-b8d8-34647c504c26-kube-api-access-l28mq\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.457594 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8b6c5-e160-4589-b8d8-34647c504c26-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.458371 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities" (OuterVolumeSpecName: "utilities") pod "0196edda-a1e0-4e11-b84d-15988bdf3507" (UID: "0196edda-a1e0-4e11-b84d-15988bdf3507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.461697 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qjpbl"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.462972 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x" (OuterVolumeSpecName: "kube-api-access-nhs2x") pod "0196edda-a1e0-4e11-b84d-15988bdf3507" (UID: "0196edda-a1e0-4e11-b84d-15988bdf3507"). InnerVolumeSpecName "kube-api-access-nhs2x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.464623 5004 scope.go:117] "RemoveContainer" containerID="0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.467169 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730\": container with ID starting with 0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730 not found: ID does not exist" containerID="0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.473196 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730"} err="failed to get container status \"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730\": rpc error: code = NotFound desc = could not find container \"0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730\": container with ID starting with 0c499ee0d0f429c8925ac4602c56939f4040af8c7c0355e80eed67a891794730 not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.473255 5004 scope.go:117] "RemoveContainer" containerID="b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.509338 5004 scope.go:117] "RemoveContainer" containerID="fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.536824 5004 scope.go:117] "RemoveContainer" containerID="43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.555650 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.559774 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.559813 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhs2x\" (UniqueName: \"kubernetes.io/projected/0196edda-a1e0-4e11-b84d-15988bdf3507-kube-api-access-nhs2x\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.572829 5004 scope.go:117] "RemoveContainer" containerID="b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.574165 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14\": container with ID starting with b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14 not found: ID does not exist" containerID="b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.574204 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14"} err="failed to get container status \"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14\": rpc error: code = NotFound desc = could not find container \"b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14\": container with ID starting with b286b4309cd0a7a905979e105c3be7136606d3fd4f8b255797c2acbd41316b14 not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.574232 5004 scope.go:117] "RemoveContainer" containerID="fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.574865 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2\": container with ID starting with fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2 not found: ID does not exist" containerID="fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.574888 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2"} err="failed to get container status \"fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2\": rpc error: code = NotFound desc = could not find container \"fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2\": container with ID starting with fad13ac639eb038b13d68d7d2cf88a028b23e801150976c2414392d9e53414c2 not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.574904 5004 scope.go:117] "RemoveContainer" containerID="43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f" Dec 08 18:57:11 crc kubenswrapper[5004]: E1208 18:57:11.577080 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f\": container with ID 
starting with 43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f not found: ID does not exist" containerID="43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.577224 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f"} err="failed to get container status \"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f\": rpc error: code = NotFound desc = could not find container \"43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f\": container with ID starting with 43953e78e431c2b8a5653c9577e4abf46184368c05096183a82ac25ef5e0688f not found: ID does not exist" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.577402 5004 scope.go:117] "RemoveContainer" containerID="ef962ce23d0dae5c5a0257d08c61c4fc1554390fdace6b14f325f7c6b7910851" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.630541 5004 scope.go:117] "RemoveContainer" containerID="0f378cc54c0b4e311d437fddf4e6103425635ed11a5f7f6a821741831915e028" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.658674 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.661131 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities\") pod \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.661272 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czptg\" (UniqueName: \"kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg\") pod \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.662054 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content\") pod \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\" (UID: \"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5\") " Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.666321 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v879b"] Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.668422 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities" (OuterVolumeSpecName: "utilities") pod "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" (UID: "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.669126 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0196edda-a1e0-4e11-b84d-15988bdf3507" (UID: "0196edda-a1e0-4e11-b84d-15988bdf3507"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.672346 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg" (OuterVolumeSpecName: "kube-api-access-czptg") pod "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" (UID: "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5"). InnerVolumeSpecName "kube-api-access-czptg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.680063 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" (UID: "a3abe155-9f6c-4a9e-aded-f9c7857f7bf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.681229 5004 scope.go:117] "RemoveContainer" containerID="723863c18e134532db94e3334c1c79368c0a190350b349290c79a311890dc2e8" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.763395 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czptg\" (UniqueName: \"kubernetes.io/projected/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-kube-api-access-czptg\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.763438 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0196edda-a1e0-4e11-b84d-15988bdf3507-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.763451 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:11 crc kubenswrapper[5004]: I1208 18:57:11.763462 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.360648 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" event={"ID":"52421bbb-c152-439c-98a9-eea063951c00","Type":"ContainerStarted","Data":"3bbe6c651ea9d707b1ccc7eb189a7d7ffa7becca928062c189ed2ecfe5beb632"} Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.360692 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" event={"ID":"52421bbb-c152-439c-98a9-eea063951c00","Type":"ContainerStarted","Data":"ba88384706b09c295b281d46bb12a486438fb20f0d262f7177529352bb76bde5"} Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.361141 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.363391 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jcq" event={"ID":"0196edda-a1e0-4e11-b84d-15988bdf3507","Type":"ContainerDied","Data":"2d51286e00543c1cadbeec01393fb7a84a118f47ad65372f0509a6de51b8d665"} Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.363529 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jcq" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.367609 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkpfb" event={"ID":"a3abe155-9f6c-4a9e-aded-f9c7857f7bf5","Type":"ContainerDied","Data":"dcfdd7c9fc694a94d908cfe78c726e40d3197927b79e1c1636c370e69010bf26"} Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.367685 5004 scope.go:117] "RemoveContainer" containerID="fbc132943e0984809bc2f3c6458619d566b5e121303a51a9a146ca1b61158b66" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.367804 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkpfb" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.371359 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.393715 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-qjpbl" podStartSLOduration=2.393698512 podStartE2EDuration="2.393698512s" podCreationTimestamp="2025-12-08 18:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:57:12.387603865 +0000 UTC m=+366.036512173" watchObservedRunningTime="2025-12-08 18:57:12.393698512 +0000 UTC m=+366.042606830" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.398916 5004 scope.go:117] "RemoveContainer" containerID="b979d09f7b710d35c90943870fef962ff466efc2b234ec086b69817bfc0525e4" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.431320 5004 scope.go:117] "RemoveContainer" containerID="ed5bae79999b728e5a0375c22a0e30fbc17318f3a89906afc44c18a5b31f208c" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.447381 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.475845 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h9jcq"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.477225 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.481195 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkpfb"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.718264 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" path="/var/lib/kubelet/pods/0196edda-a1e0-4e11-b84d-15988bdf3507/volumes" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.719547 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" path="/var/lib/kubelet/pods/58b8eee8-00f8-4078-a0d1-3805d336771f/volumes" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.720260 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a334e99e-c733-444f-909c-978afa75eea2" path="/var/lib/kubelet/pods/a334e99e-c733-444f-909c-978afa75eea2/volumes" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.721439 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" 
path="/var/lib/kubelet/pods/a3abe155-9f6c-4a9e-aded-f9c7857f7bf5/volumes" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.722104 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" path="/var/lib/kubelet/pods/aab8b6c5-e160-4589-b8d8-34647c504c26/volumes" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963184 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7rnm"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963909 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963943 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963956 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963963 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963980 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.963988 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964005 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964012 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964021 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964028 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964038 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964045 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964052 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964060 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964081 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964089 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964096 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964103 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="extract-content" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964117 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964124 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964136 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964143 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="extract-utilities" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964152 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964159 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964167 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964174 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964312 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="0196edda-a1e0-4e11-b84d-15988bdf3507" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964325 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="aab8b6c5-e160-4589-b8d8-34647c504c26" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964337 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="a334e99e-c733-444f-909c-978afa75eea2" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964351 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3abe155-9f6c-4a9e-aded-f9c7857f7bf5" containerName="registry-server" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.964361 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="58b8eee8-00f8-4078-a0d1-3805d336771f" containerName="marketplace-operator" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.973219 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.974244 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7rnm"] Dec 08 18:57:12 crc kubenswrapper[5004]: I1208 18:57:12.975494 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.082166 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lps8k\" (UniqueName: \"kubernetes.io/projected/67539ce7-334b-48f5-bfc0-bc60dea0bb18-kube-api-access-lps8k\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.082369 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-utilities\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.082476 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-catalog-content\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.184127 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-utilities\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.184188 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-catalog-content\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.184245 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lps8k\" (UniqueName: \"kubernetes.io/projected/67539ce7-334b-48f5-bfc0-bc60dea0bb18-kube-api-access-lps8k\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.184711 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-utilities\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.184842 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67539ce7-334b-48f5-bfc0-bc60dea0bb18-catalog-content\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") 
" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.204961 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lps8k\" (UniqueName: \"kubernetes.io/projected/67539ce7-334b-48f5-bfc0-bc60dea0bb18-kube-api-access-lps8k\") pod \"redhat-operators-l7rnm\" (UID: \"67539ce7-334b-48f5-bfc0-bc60dea0bb18\") " pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.303117 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:13 crc kubenswrapper[5004]: I1208 18:57:13.487411 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7rnm"] Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.382771 5004 generic.go:358] "Generic (PLEG): container finished" podID="67539ce7-334b-48f5-bfc0-bc60dea0bb18" containerID="19e3b1514a603971c3a565152d49873e3c33a6e055952744a7a3a5f968200363" exitCode=0 Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.382829 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7rnm" event={"ID":"67539ce7-334b-48f5-bfc0-bc60dea0bb18","Type":"ContainerDied","Data":"19e3b1514a603971c3a565152d49873e3c33a6e055952744a7a3a5f968200363"} Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.382885 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7rnm" event={"ID":"67539ce7-334b-48f5-bfc0-bc60dea0bb18","Type":"ContainerStarted","Data":"4a391eba9039b6807fe3e2b1f0c1ca8dd25015cf6531015ec8e8252d9aecf11f"} Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.764008 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjv2b"] Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.770638 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.773134 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.777089 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjv2b"] Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.902503 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cslj\" (UniqueName: \"kubernetes.io/projected/1d13d847-e121-427c-a98b-3e15bbd621f1-kube-api-access-2cslj\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.902686 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-catalog-content\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:14 crc kubenswrapper[5004]: I1208 18:57:14.902788 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-utilities\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.004543 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2cslj\" (UniqueName: \"kubernetes.io/projected/1d13d847-e121-427c-a98b-3e15bbd621f1-kube-api-access-2cslj\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.004654 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-catalog-content\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.004698 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-utilities\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.005187 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-catalog-content\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.005387 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d13d847-e121-427c-a98b-3e15bbd621f1-utilities\") pod 
\"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.029471 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cslj\" (UniqueName: \"kubernetes.io/projected/1d13d847-e121-427c-a98b-3e15bbd621f1-kube-api-access-2cslj\") pod \"certified-operators-vjv2b\" (UID: \"1d13d847-e121-427c-a98b-3e15bbd621f1\") " pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.087673 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.365563 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6s4gp"] Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.380659 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s4gp"] Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.380839 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.384582 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.393645 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7rnm" event={"ID":"67539ce7-334b-48f5-bfc0-bc60dea0bb18","Type":"ContainerStarted","Data":"0764dbf77a37d66813482d9fa80c9d4e0eea79edf5c96e7596a2dd2b6853b599"} Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.479816 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjv2b"] Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.510899 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-catalog-content\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.510965 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwfds\" (UniqueName: \"kubernetes.io/projected/a160c10f-bd66-4b3e-821e-ebb170972dcb-kube-api-access-qwfds\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.511046 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-utilities\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.612447 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-catalog-content\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") 
" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.612776 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qwfds\" (UniqueName: \"kubernetes.io/projected/a160c10f-bd66-4b3e-821e-ebb170972dcb-kube-api-access-qwfds\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.613103 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-utilities\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.613110 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-catalog-content\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.613356 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a160c10f-bd66-4b3e-821e-ebb170972dcb-utilities\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.634344 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwfds\" (UniqueName: \"kubernetes.io/projected/a160c10f-bd66-4b3e-821e-ebb170972dcb-kube-api-access-qwfds\") pod \"community-operators-6s4gp\" (UID: \"a160c10f-bd66-4b3e-821e-ebb170972dcb\") " pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.701479 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:15 crc kubenswrapper[5004]: I1208 18:57:15.908822 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s4gp"] Dec 08 18:57:15 crc kubenswrapper[5004]: W1208 18:57:15.917956 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda160c10f_bd66_4b3e_821e_ebb170972dcb.slice/crio-e0e53166f8e4a3b380a65bbddb198afc3f013f925e5ef669238a7da27c113abf WatchSource:0}: Error finding container e0e53166f8e4a3b380a65bbddb198afc3f013f925e5ef669238a7da27c113abf: Status 404 returned error can't find the container with id e0e53166f8e4a3b380a65bbddb198afc3f013f925e5ef669238a7da27c113abf Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.401044 5004 generic.go:358] "Generic (PLEG): container finished" podID="a160c10f-bd66-4b3e-821e-ebb170972dcb" containerID="46a8bbecf8230b698ee73c504b458e4f6d66637ef3fe44924da4f0f1e9cb8e49" exitCode=0 Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.401429 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s4gp" event={"ID":"a160c10f-bd66-4b3e-821e-ebb170972dcb","Type":"ContainerDied","Data":"46a8bbecf8230b698ee73c504b458e4f6d66637ef3fe44924da4f0f1e9cb8e49"} Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.401458 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s4gp" event={"ID":"a160c10f-bd66-4b3e-821e-ebb170972dcb","Type":"ContainerStarted","Data":"e0e53166f8e4a3b380a65bbddb198afc3f013f925e5ef669238a7da27c113abf"} Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.409739 5004 generic.go:358] "Generic (PLEG): container finished" podID="1d13d847-e121-427c-a98b-3e15bbd621f1" containerID="1f9acb62b71af5aab358a65ba9b416da61ce1ba7519abc356446b2a94869c004" exitCode=0 Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.409892 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjv2b" event={"ID":"1d13d847-e121-427c-a98b-3e15bbd621f1","Type":"ContainerDied","Data":"1f9acb62b71af5aab358a65ba9b416da61ce1ba7519abc356446b2a94869c004"} Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.409925 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjv2b" event={"ID":"1d13d847-e121-427c-a98b-3e15bbd621f1","Type":"ContainerStarted","Data":"c52da7e142d7fdf66297dff6b80780d0cf7e57e5b221c09af15cedfb13019e25"} Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.418006 5004 generic.go:358] "Generic (PLEG): container finished" podID="67539ce7-334b-48f5-bfc0-bc60dea0bb18" containerID="0764dbf77a37d66813482d9fa80c9d4e0eea79edf5c96e7596a2dd2b6853b599" exitCode=0 Dec 08 18:57:16 crc kubenswrapper[5004]: I1208 18:57:16.418050 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7rnm" event={"ID":"67539ce7-334b-48f5-bfc0-bc60dea0bb18","Type":"ContainerDied","Data":"0764dbf77a37d66813482d9fa80c9d4e0eea79edf5c96e7596a2dd2b6853b599"} Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.174951 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.183577 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.188153 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.200759 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.344617 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2r8g\" (UniqueName: \"kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.345030 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.345185 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.424084 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s4gp" event={"ID":"a160c10f-bd66-4b3e-821e-ebb170972dcb","Type":"ContainerStarted","Data":"da14a1680d82883791c9da849c20387b392ae32f0c99adc403d498cc3f470097"} Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.425931 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjv2b" event={"ID":"1d13d847-e121-427c-a98b-3e15bbd621f1","Type":"ContainerStarted","Data":"b7a39e89263d48b766961b6847d615ea27fdf118474eb2bf08fb7eaa9c37c2d1"} Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.428532 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7rnm" event={"ID":"67539ce7-334b-48f5-bfc0-bc60dea0bb18","Type":"ContainerStarted","Data":"2f923c00d4f4324c7cfd27f990dc8192c056c801a24199b4250051ddfda68c93"} Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.446719 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.446836 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2r8g\" (UniqueName: \"kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.446871 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.447682 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.447692 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.482234 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2r8g\" (UniqueName: \"kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g\") pod \"redhat-marketplace-sjp9l\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.498891 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.519335 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7rnm" podStartSLOduration=4.942357911 podStartE2EDuration="5.519315841s" podCreationTimestamp="2025-12-08 18:57:12 +0000 UTC" firstStartedPulling="2025-12-08 18:57:14.383844481 +0000 UTC m=+368.032752789" lastFinishedPulling="2025-12-08 18:57:14.960802411 +0000 UTC m=+368.609710719" observedRunningTime="2025-12-08 18:57:17.517465502 +0000 UTC m=+371.166373820" watchObservedRunningTime="2025-12-08 18:57:17.519315841 +0000 UTC m=+371.168224149" Dec 08 18:57:17 crc kubenswrapper[5004]: I1208 18:57:17.972122 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.435004 5004 generic.go:358] "Generic (PLEG): container finished" podID="a160c10f-bd66-4b3e-821e-ebb170972dcb" containerID="da14a1680d82883791c9da849c20387b392ae32f0c99adc403d498cc3f470097" exitCode=0 Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.435215 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s4gp" event={"ID":"a160c10f-bd66-4b3e-821e-ebb170972dcb","Type":"ContainerDied","Data":"da14a1680d82883791c9da849c20387b392ae32f0c99adc403d498cc3f470097"} Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.436744 5004 generic.go:358] "Generic (PLEG): container finished" podID="1d13d847-e121-427c-a98b-3e15bbd621f1" containerID="b7a39e89263d48b766961b6847d615ea27fdf118474eb2bf08fb7eaa9c37c2d1" exitCode=0 Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.436839 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjv2b" 
event={"ID":"1d13d847-e121-427c-a98b-3e15bbd621f1","Type":"ContainerDied","Data":"b7a39e89263d48b766961b6847d615ea27fdf118474eb2bf08fb7eaa9c37c2d1"} Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.438681 5004 generic.go:358] "Generic (PLEG): container finished" podID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerID="25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a" exitCode=0 Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.439292 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerDied","Data":"25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a"} Dec 08 18:57:18 crc kubenswrapper[5004]: I1208 18:57:18.439419 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerStarted","Data":"6d9ffce6c346bcc7015f507602ca74d4c4493d2f9773b2ce22d5090092f63c57"} Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.445650 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s4gp" event={"ID":"a160c10f-bd66-4b3e-821e-ebb170972dcb","Type":"ContainerStarted","Data":"1a343d1dc7dec87a62e9285c8468d469b979f00511d12d614ce5a8e56c4ed55d"} Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.448903 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjv2b" event={"ID":"1d13d847-e121-427c-a98b-3e15bbd621f1","Type":"ContainerStarted","Data":"9d7303891dcc7939e75887550f5274bb33d0ddf2995974fd8a9d8ca41e49d00a"} Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.450379 5004 generic.go:358] "Generic (PLEG): container finished" podID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerID="926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862" exitCode=0 Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.450488 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerDied","Data":"926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862"} Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.479117 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6s4gp" podStartSLOduration=3.818814133 podStartE2EDuration="4.479099035s" podCreationTimestamp="2025-12-08 18:57:15 +0000 UTC" firstStartedPulling="2025-12-08 18:57:16.402311748 +0000 UTC m=+370.051220056" lastFinishedPulling="2025-12-08 18:57:17.06259665 +0000 UTC m=+370.711504958" observedRunningTime="2025-12-08 18:57:19.47709484 +0000 UTC m=+373.126003148" watchObservedRunningTime="2025-12-08 18:57:19.479099035 +0000 UTC m=+373.128007353" Dec 08 18:57:19 crc kubenswrapper[5004]: I1208 18:57:19.550984 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjv2b" podStartSLOduration=4.800117295 podStartE2EDuration="5.550959685s" podCreationTimestamp="2025-12-08 18:57:14 +0000 UTC" firstStartedPulling="2025-12-08 18:57:16.410986149 +0000 UTC m=+370.059894457" lastFinishedPulling="2025-12-08 18:57:17.161828539 +0000 UTC m=+370.810736847" observedRunningTime="2025-12-08 18:57:19.540176656 +0000 UTC m=+373.189084984" watchObservedRunningTime="2025-12-08 18:57:19.550959685 +0000 UTC m=+373.199867993" Dec 08 18:57:20 crc 
kubenswrapper[5004]: I1208 18:57:20.459045 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerStarted","Data":"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd"} Dec 08 18:57:20 crc kubenswrapper[5004]: I1208 18:57:20.479494 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjp9l" podStartSLOduration=2.942711461 podStartE2EDuration="3.479480657s" podCreationTimestamp="2025-12-08 18:57:17 +0000 UTC" firstStartedPulling="2025-12-08 18:57:18.439712389 +0000 UTC m=+372.088620697" lastFinishedPulling="2025-12-08 18:57:18.976481585 +0000 UTC m=+372.625389893" observedRunningTime="2025-12-08 18:57:20.475975583 +0000 UTC m=+374.124883891" watchObservedRunningTime="2025-12-08 18:57:20.479480657 +0000 UTC m=+374.128388965" Dec 08 18:57:23 crc kubenswrapper[5004]: I1208 18:57:23.304800 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:23 crc kubenswrapper[5004]: I1208 18:57:23.305178 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:23 crc kubenswrapper[5004]: I1208 18:57:23.359449 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:23 crc kubenswrapper[5004]: I1208 18:57:23.516525 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7rnm" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.088149 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.088196 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.124533 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.522257 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjv2b" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.702677 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.702910 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:25 crc kubenswrapper[5004]: I1208 18:57:25.742521 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:26 crc kubenswrapper[5004]: I1208 18:57:26.523801 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6s4gp" Dec 08 18:57:27 crc kubenswrapper[5004]: I1208 18:57:27.500377 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:27 crc kubenswrapper[5004]: I1208 18:57:27.500731 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:27 crc kubenswrapper[5004]: I1208 18:57:27.551914 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:57:28 crc kubenswrapper[5004]: I1208 18:57:28.554221 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 18:58:31 crc kubenswrapper[5004]: I1208 18:58:31.000030 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:58:31 crc kubenswrapper[5004]: I1208 18:58:31.000579 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:01 crc kubenswrapper[5004]: I1208 18:59:01.000185 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:59:01 crc kubenswrapper[5004]: I1208 18:59:01.000894 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:23 crc kubenswrapper[5004]: E1208 18:59:23.447577 5004 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 08 18:59:31 crc kubenswrapper[5004]: I1208 18:59:31.000159 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:59:31 crc kubenswrapper[5004]: I1208 18:59:31.000712 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:31 crc kubenswrapper[5004]: I1208 18:59:31.000755 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 18:59:31 crc kubenswrapper[5004]: I1208 18:59:31.001357 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1"} pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:59:31 crc kubenswrapper[5004]: I1208 18:59:31.001418 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" containerID="cri-o://d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1" gracePeriod=600 Dec 08 18:59:32 crc kubenswrapper[5004]: I1208 18:59:32.190566 5004 generic.go:358] "Generic (PLEG): container finished" podID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerID="d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1" exitCode=0 Dec 08 18:59:32 crc kubenswrapper[5004]: I1208 18:59:32.190673 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerDied","Data":"d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1"} Dec 08 18:59:32 crc kubenswrapper[5004]: I1208 18:59:32.190785 5004 scope.go:117] "RemoveContainer" containerID="aeeaf8c426d441fb729ffc2f1049f785259ca6b7e0ef2b9fe2cbdb0978a2ec65" Dec 08 18:59:33 crc kubenswrapper[5004]: I1208 18:59:33.200125 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51"} Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.179175 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c"] Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.198212 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c"] Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.198384 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.201963 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.202456 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.291638 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.291685 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.291762 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4ts\" (UniqueName: \"kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.392951 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v4ts\" (UniqueName: \"kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.393277 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.393355 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.394254 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 
08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.402035 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.413659 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v4ts\" (UniqueName: \"kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts\") pod \"collect-profiles-29420340-xtq2c\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.517058 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:00 crc kubenswrapper[5004]: I1208 19:00:00.759219 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c"] Dec 08 19:00:01 crc kubenswrapper[5004]: I1208 19:00:01.368757 5004 generic.go:358] "Generic (PLEG): container finished" podID="903ef75a-78df-4403-99b6-2d6456b21fd6" containerID="7121e4bf2cec954fd2b4b7c7daa58537ce13b6f6eb4aee9e538f5c7752de4493" exitCode=0 Dec 08 19:00:01 crc kubenswrapper[5004]: I1208 19:00:01.368824 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" event={"ID":"903ef75a-78df-4403-99b6-2d6456b21fd6","Type":"ContainerDied","Data":"7121e4bf2cec954fd2b4b7c7daa58537ce13b6f6eb4aee9e538f5c7752de4493"} Dec 08 19:00:01 crc kubenswrapper[5004]: I1208 19:00:01.370330 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" event={"ID":"903ef75a-78df-4403-99b6-2d6456b21fd6","Type":"ContainerStarted","Data":"fff506b28b4b7b01d86d61577c6748c944d2d5012f0f9cc02a936d8c326af182"} Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.589954 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.722794 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v4ts\" (UniqueName: \"kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts\") pod \"903ef75a-78df-4403-99b6-2d6456b21fd6\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.722883 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume\") pod \"903ef75a-78df-4403-99b6-2d6456b21fd6\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.722911 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume\") pod \"903ef75a-78df-4403-99b6-2d6456b21fd6\" (UID: \"903ef75a-78df-4403-99b6-2d6456b21fd6\") " Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.724202 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume" (OuterVolumeSpecName: "config-volume") pod "903ef75a-78df-4403-99b6-2d6456b21fd6" (UID: "903ef75a-78df-4403-99b6-2d6456b21fd6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.732598 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "903ef75a-78df-4403-99b6-2d6456b21fd6" (UID: "903ef75a-78df-4403-99b6-2d6456b21fd6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.732604 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts" (OuterVolumeSpecName: "kube-api-access-2v4ts") pod "903ef75a-78df-4403-99b6-2d6456b21fd6" (UID: "903ef75a-78df-4403-99b6-2d6456b21fd6"). InnerVolumeSpecName "kube-api-access-2v4ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.825242 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2v4ts\" (UniqueName: \"kubernetes.io/projected/903ef75a-78df-4403-99b6-2d6456b21fd6-kube-api-access-2v4ts\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.825269 5004 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903ef75a-78df-4403-99b6-2d6456b21fd6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:02 crc kubenswrapper[5004]: I1208 19:00:02.825277 5004 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/903ef75a-78df-4403-99b6-2d6456b21fd6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:03 crc kubenswrapper[5004]: I1208 19:00:03.383690 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" Dec 08 19:00:03 crc kubenswrapper[5004]: I1208 19:00:03.383705 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-xtq2c" event={"ID":"903ef75a-78df-4403-99b6-2d6456b21fd6","Type":"ContainerDied","Data":"fff506b28b4b7b01d86d61577c6748c944d2d5012f0f9cc02a936d8c326af182"} Dec 08 19:00:03 crc kubenswrapper[5004]: I1208 19:00:03.383738 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff506b28b4b7b01d86d61577c6748c944d2d5012f0f9cc02a936d8c326af182" Dec 08 19:01:07 crc kubenswrapper[5004]: I1208 19:01:07.041832 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 19:01:07 crc kubenswrapper[5004]: I1208 19:01:07.049012 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 19:01:07 crc kubenswrapper[5004]: I1208 19:01:07.064780 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:01:07 crc kubenswrapper[5004]: I1208 19:01:07.068920 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:01:19 crc kubenswrapper[5004]: I1208 19:01:19.997528 5004 ???:1] "http: TLS handshake error from 192.168.126.11:40264: no serving certificate available for the kubelet" Dec 08 19:02:01 crc kubenswrapper[5004]: I1208 19:02:01.000509 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:02:01 crc kubenswrapper[5004]: I1208 19:02:01.001022 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:02:31 crc kubenswrapper[5004]: I1208 19:02:31.000227 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:02:31 crc kubenswrapper[5004]: I1208 19:02:31.001263 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.262658 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z"] Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.263580 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="kube-rbac-proxy" containerID="cri-o://4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.263692 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="ovnkube-cluster-manager" containerID="cri-o://5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.447690 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.465124 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides\") pod \"02dfac61-6fa6-441d-83f2-c2f275a144e8\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.465198 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert\") pod \"02dfac61-6fa6-441d-83f2-c2f275a144e8\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.465271 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l8m8\" (UniqueName: \"kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8\") pod \"02dfac61-6fa6-441d-83f2-c2f275a144e8\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.465364 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config\") pod \"02dfac61-6fa6-441d-83f2-c2f275a144e8\" (UID: \"02dfac61-6fa6-441d-83f2-c2f275a144e8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.466407 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "02dfac61-6fa6-441d-83f2-c2f275a144e8" (UID: "02dfac61-6fa6-441d-83f2-c2f275a144e8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.466659 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "02dfac61-6fa6-441d-83f2-c2f275a144e8" (UID: "02dfac61-6fa6-441d-83f2-c2f275a144e8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.483237 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc"] Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.483957 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="kube-rbac-proxy" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.483983 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="kube-rbac-proxy" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484013 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="903ef75a-78df-4403-99b6-2d6456b21fd6" containerName="collect-profiles" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484024 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="903ef75a-78df-4403-99b6-2d6456b21fd6" containerName="collect-profiles" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484036 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="ovnkube-cluster-manager" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484043 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="ovnkube-cluster-manager" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484175 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="ovnkube-cluster-manager" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484194 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerName="kube-rbac-proxy" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.484215 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="903ef75a-78df-4403-99b6-2d6456b21fd6" containerName="collect-profiles" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.488631 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.496591 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "02dfac61-6fa6-441d-83f2-c2f275a144e8" (UID: "02dfac61-6fa6-441d-83f2-c2f275a144e8"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.499573 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8" (OuterVolumeSpecName: "kube-api-access-8l8m8") pod "02dfac61-6fa6-441d-83f2-c2f275a144e8" (UID: "02dfac61-6fa6-441d-83f2-c2f275a144e8"). InnerVolumeSpecName "kube-api-access-8l8m8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.508609 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dmsk4"] Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509159 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-controller" containerID="cri-o://440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509235 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509270 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="sbdb" containerID="cri-o://85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509285 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-node" containerID="cri-o://5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509333 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-acl-logging" containerID="cri-o://535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509229 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="nbdb" containerID="cri-o://50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.509463 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="northd" containerID="cri-o://9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.544467 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovnkube-controller" containerID="cri-o://ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" gracePeriod=30 Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567708 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: 
\"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567824 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82frf\" (UniqueName: \"kubernetes.io/projected/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-kube-api-access-82frf\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567868 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567892 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567958 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8l8m8\" (UniqueName: \"kubernetes.io/projected/02dfac61-6fa6-441d-83f2-c2f275a144e8-kube-api-access-8l8m8\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567978 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.567990 5004 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dfac61-6fa6-441d-83f2-c2f275a144e8-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.568002 5004 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dfac61-6fa6-441d-83f2-c2f275a144e8-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.668957 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82frf\" (UniqueName: \"kubernetes.io/projected/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-kube-api-access-82frf\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.669056 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.669111 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.669178 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.669878 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.670580 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.680539 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.687002 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82frf\" (UniqueName: \"kubernetes.io/projected/95c4fdb3-e6bb-4f46-bd0a-e80844c948ee-kube-api-access-82frf\") pod \"ovnkube-control-plane-97c9b6c48-cltnc\" (UID: \"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.834715 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.861745 5004 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.880530 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dmsk4_ea6c2cb7-5c47-47a3-b87e-fc8544207aa8/ovn-acl-logging/0.log" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.881492 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dmsk4_ea6c2cb7-5c47-47a3-b87e-fc8544207aa8/ovn-controller/0.log" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.882370 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.958881 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lmc4j"] Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959614 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959638 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959651 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovnkube-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959659 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovnkube-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959673 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="sbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959679 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="sbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959688 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959695 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959703 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="nbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959708 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="nbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959720 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-node" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959725 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-node" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959737 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-acl-logging" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959743 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-acl-logging" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959750 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kubecfg-setup" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959755 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kubecfg-setup" Dec 08 19:02:40 crc 
kubenswrapper[5004]: I1208 19:02:40.959767 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="northd" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959772 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="northd" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959852 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-acl-logging" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959862 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovnkube-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959874 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="ovn-controller" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959881 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="nbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959888 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="sbdb" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959894 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="northd" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959901 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-node" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.959906 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.967673 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973421 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973466 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973501 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973533 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973593 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973630 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973658 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973685 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973720 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973739 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973779 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ntv\" (UniqueName: \"kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973797 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973822 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973848 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973907 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973928 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973946 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.973976 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974002 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974033 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log\") pod \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\" (UID: \"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8\") " Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974380 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log" (OuterVolumeSpecName: "node-log") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974418 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974443 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash" (OuterVolumeSpecName: "host-slash") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974465 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.974522 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.975310 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.975354 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.975379 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.975403 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.975428 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.978769 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.982641 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv" (OuterVolumeSpecName: "kube-api-access-d6ntv") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "kube-api-access-d6ntv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.982713 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.982741 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.983403 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.983447 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.983604 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.983721 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket" (OuterVolumeSpecName: "log-socket") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:40 crc kubenswrapper[5004]: I1208 19:02:40.983934 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.002276 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" (UID: "ea6c2cb7-5c47-47a3-b87e-fc8544207aa8"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075028 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-etc-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075108 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075137 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075163 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-script-lib\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075192 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-kubelet\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075217 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-var-lib-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075248 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-slash\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075270 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-systemd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075296 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-log-socket\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075328 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075354 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-config\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075379 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-node-log\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075402 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznf5\" (UniqueName: \"kubernetes.io/projected/6552c0dc-d410-496c-ba60-6f2b5918557f-kube-api-access-fznf5\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075426 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-env-overrides\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075448 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-ovn\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075473 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-netns\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075493 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6552c0dc-d410-496c-ba60-6f2b5918557f-ovn-node-metrics-cert\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075539 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-bin\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075579 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-netd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075604 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-systemd-units\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075655 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6ntv\" (UniqueName: \"kubernetes.io/projected/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-kube-api-access-d6ntv\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075670 5004 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075684 5004 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075695 5004 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075707 5004 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075718 5004 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075730 5004 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075741 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075754 5004 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc 
kubenswrapper[5004]: I1208 19:02:41.075766 5004 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075777 5004 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075789 5004 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075801 5004 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075813 5004 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075825 5004 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075837 5004 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075848 5004 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075860 5004 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075874 5004 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.075885 5004 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176759 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-var-lib-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176804 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-slash\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176825 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-systemd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176848 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-log-socket\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176877 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176880 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-var-lib-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176893 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-config\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176949 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-node-log\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.176975 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fznf5\" (UniqueName: \"kubernetes.io/projected/6552c0dc-d410-496c-ba60-6f2b5918557f-kube-api-access-fznf5\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177001 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-env-overrides\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177022 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-ovn\") pod \"ovnkube-node-lmc4j\" (UID: 
\"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177042 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-netns\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177062 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6552c0dc-d410-496c-ba60-6f2b5918557f-ovn-node-metrics-cert\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177129 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-bin\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177165 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-netd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177190 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-systemd-units\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177248 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-etc-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177276 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177301 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177325 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-script-lib\") pod \"ovnkube-node-lmc4j\" (UID: 
\"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177356 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-kubelet\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177426 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-kubelet\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177465 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-node-log\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177566 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-config\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177630 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-slash\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177661 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-systemd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177691 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-log-socket\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177728 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177758 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-netd\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177829 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-systemd-units\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177867 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-etc-openvswitch\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177898 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.177930 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-ovn-kubernetes\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.178189 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-env-overrides\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.178223 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-run-ovn\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.178274 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-cni-bin\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.178325 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6552c0dc-d410-496c-ba60-6f2b5918557f-host-run-netns\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.179292 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6552c0dc-d410-496c-ba60-6f2b5918557f-ovnkube-script-lib\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.181795 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6552c0dc-d410-496c-ba60-6f2b5918557f-ovn-node-metrics-cert\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.196869 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fznf5\" (UniqueName: \"kubernetes.io/projected/6552c0dc-d410-496c-ba60-6f2b5918557f-kube-api-access-fznf5\") pod \"ovnkube-node-lmc4j\" (UID: \"6552c0dc-d410-496c-ba60-6f2b5918557f\") " pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.229917 5004 generic.go:358] "Generic (PLEG): container finished" podID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerID="5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.229951 5004 generic.go:358] "Generic (PLEG): container finished" podID="02dfac61-6fa6-441d-83f2-c2f275a144e8" containerID="4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.229981 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerDied","Data":"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.230044 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.230061 5004 scope.go:117] "RemoveContainer" containerID="5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.230048 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerDied","Data":"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.230273 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z" event={"ID":"02dfac61-6fa6-441d-83f2-c2f275a144e8","Type":"ContainerDied","Data":"5a79e581d21599ec84be76f0a22bbf7585bc944571f9dcbdfd8069ed2238e0aa"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.233741 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" event={"ID":"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee","Type":"ContainerStarted","Data":"d2e4f5c81b69132bceb9d3042ea2c680166da0a7b4c8e267f52a30be6f16c16a"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.233801 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" event={"ID":"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee","Type":"ContainerStarted","Data":"37fcec9cb892282b2ff6e395dff6a2fa22ad82ce2c709a5216ef0ffb6415035e"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.236752 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxdkt_e00ae10b-1af7-4d7e-aad6-135dac0d2aa5/kube-multus/0.log" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.236796 5004 generic.go:358] "Generic (PLEG): container finished" 
podID="e00ae10b-1af7-4d7e-aad6-135dac0d2aa5" containerID="6002385cc01ae78d4d79b236983c1c75f317e016a323301fbef2d9d8c68325a6" exitCode=2 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.236929 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxdkt" event={"ID":"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5","Type":"ContainerDied","Data":"6002385cc01ae78d4d79b236983c1c75f317e016a323301fbef2d9d8c68325a6"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.237589 5004 scope.go:117] "RemoveContainer" containerID="6002385cc01ae78d4d79b236983c1c75f317e016a323301fbef2d9d8c68325a6" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.252515 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z"] Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.260726 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-c924z"] Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.261333 5004 scope.go:117] "RemoveContainer" containerID="4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.262660 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dmsk4_ea6c2cb7-5c47-47a3-b87e-fc8544207aa8/ovn-acl-logging/0.log" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263136 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dmsk4_ea6c2cb7-5c47-47a3-b87e-fc8544207aa8/ovn-controller/0.log" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263659 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263686 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263695 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263703 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263710 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263717 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" exitCode=0 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263724 5004 generic.go:358] "Generic (PLEG): container finished" podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" exitCode=143 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263731 5004 generic.go:358] "Generic (PLEG): container finished" 
podID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" exitCode=143 Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263865 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263892 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263904 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263917 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263928 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263941 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263954 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263964 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263970 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263977 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263985 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.263990 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 19:02:41 crc 
kubenswrapper[5004]: I1208 19:02:41.263997 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264003 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264012 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264021 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264029 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264035 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264041 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264047 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264053 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264059 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264064 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264094 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264104 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264115 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264122 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264128 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264134 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264140 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264146 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264152 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264157 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264163 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264172 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" event={"ID":"ea6c2cb7-5c47-47a3-b87e-fc8544207aa8","Type":"ContainerDied","Data":"5a1975d5d45b392de9f069445261f1a3873605d34aad4915088531538c96380b"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264181 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264188 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264194 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264201 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264207 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264212 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264232 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264242 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264254 5004 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.264425 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmsk4" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.291502 5004 scope.go:117] "RemoveContainer" containerID="5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.291974 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.295646 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e\": container with ID starting with 5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e not found: ID does not exist" containerID="5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.295980 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e"} err="failed to get container status \"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e\": rpc error: code = NotFound desc = could not find container \"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e\": container with ID starting with 5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.296007 5004 scope.go:117] "RemoveContainer" containerID="4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.296616 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543\": container with ID starting with 4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543 not found: ID does not exist" containerID="4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.296674 5004 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543"} err="failed to get container status \"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543\": rpc error: code = NotFound desc = could not find container \"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543\": container with ID starting with 4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.296710 5004 scope.go:117] "RemoveContainer" containerID="5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.297256 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e"} err="failed to get container status \"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e\": rpc error: code = NotFound desc = could not find container \"5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e\": container with ID starting with 5ab9eb3772184564a246bf909fa63a65557b1f8410c1b0f685fb8f3ce8f6bd9e not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.297281 5004 scope.go:117] "RemoveContainer" containerID="4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.297590 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543"} err="failed to get container status \"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543\": rpc error: code = NotFound desc = could not find container \"4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543\": container with ID starting with 4f5101a289877a4d94e680bff87a51da0038ef2539b31154a6df031431627543 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.297612 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.321112 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dmsk4"] Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.325620 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dmsk4"] Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.336141 5004 scope.go:117] "RemoveContainer" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.352331 5004 scope.go:117] "RemoveContainer" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.374823 5004 scope.go:117] "RemoveContainer" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.413789 5004 scope.go:117] "RemoveContainer" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.458185 5004 scope.go:117] "RemoveContainer" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.471755 5004 scope.go:117] "RemoveContainer" 
containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.495406 5004 scope.go:117] "RemoveContainer" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.510514 5004 scope.go:117] "RemoveContainer" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.530211 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.538831 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.538910 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} err="failed to get container status \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.538937 5004 scope.go:117] "RemoveContainer" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.539403 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": container with ID starting with 85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea not found: ID does not exist" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.539452 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} err="failed to get container status \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": rpc error: code = NotFound desc = could not find container \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": container with ID starting with 85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.539466 5004 scope.go:117] "RemoveContainer" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.539992 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": container with ID starting with 50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765 not found: ID does not exist" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" 
Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540049 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} err="failed to get container status \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": rpc error: code = NotFound desc = could not find container \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": container with ID starting with 50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540063 5004 scope.go:117] "RemoveContainer" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.540407 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": container with ID starting with 9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac not found: ID does not exist" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540461 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} err="failed to get container status \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": rpc error: code = NotFound desc = could not find container \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": container with ID starting with 9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540501 5004 scope.go:117] "RemoveContainer" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.540931 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": container with ID starting with f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86 not found: ID does not exist" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540968 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} err="failed to get container status \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": rpc error: code = NotFound desc = could not find container \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": container with ID starting with f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.540986 5004 scope.go:117] "RemoveContainer" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.541264 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": container with ID starting with 
5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302 not found: ID does not exist" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541303 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} err="failed to get container status \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": rpc error: code = NotFound desc = could not find container \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": container with ID starting with 5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541320 5004 scope.go:117] "RemoveContainer" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.541688 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": container with ID starting with 535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106 not found: ID does not exist" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541707 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} err="failed to get container status \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": rpc error: code = NotFound desc = could not find container \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": container with ID starting with 535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541738 5004 scope.go:117] "RemoveContainer" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.541906 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": container with ID starting with 440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f not found: ID does not exist" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541925 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} err="failed to get container status \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": rpc error: code = NotFound desc = could not find container \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": container with ID starting with 440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.541954 5004 scope.go:117] "RemoveContainer" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: E1208 19:02:41.542246 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": container with ID starting with 16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323 not found: ID does not exist" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542265 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} err="failed to get container status \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": rpc error: code = NotFound desc = could not find container \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": container with ID starting with 16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542325 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542570 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} err="failed to get container status \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542588 5004 scope.go:117] "RemoveContainer" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542792 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} err="failed to get container status \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": rpc error: code = NotFound desc = could not find container \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": container with ID starting with 85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.542820 5004 scope.go:117] "RemoveContainer" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543027 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} err="failed to get container status \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": rpc error: code = NotFound desc = could not find container \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": container with ID starting with 50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543043 5004 scope.go:117] "RemoveContainer" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543262 5004 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} err="failed to get container status \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": rpc error: code = NotFound desc = could not find container \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": container with ID starting with 9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543279 5004 scope.go:117] "RemoveContainer" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543533 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} err="failed to get container status \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": rpc error: code = NotFound desc = could not find container \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": container with ID starting with f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543550 5004 scope.go:117] "RemoveContainer" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543770 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} err="failed to get container status \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": rpc error: code = NotFound desc = could not find container \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": container with ID starting with 5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.543795 5004 scope.go:117] "RemoveContainer" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.544281 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} err="failed to get container status \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": rpc error: code = NotFound desc = could not find container \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": container with ID starting with 535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.544311 5004 scope.go:117] "RemoveContainer" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.544774 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} err="failed to get container status \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": rpc error: code = NotFound desc = could not find container \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": container with ID starting with 440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f not found: ID does not exist" Dec 
08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.544802 5004 scope.go:117] "RemoveContainer" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545021 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} err="failed to get container status \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": rpc error: code = NotFound desc = could not find container \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": container with ID starting with 16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545041 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545336 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} err="failed to get container status \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545355 5004 scope.go:117] "RemoveContainer" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545593 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} err="failed to get container status \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": rpc error: code = NotFound desc = could not find container \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": container with ID starting with 85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545625 5004 scope.go:117] "RemoveContainer" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.545942 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} err="failed to get container status \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": rpc error: code = NotFound desc = could not find container \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": container with ID starting with 50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.546200 5004 scope.go:117] "RemoveContainer" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.547052 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} err="failed to get container status 
\"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": rpc error: code = NotFound desc = could not find container \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": container with ID starting with 9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.547101 5004 scope.go:117] "RemoveContainer" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.547753 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} err="failed to get container status \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": rpc error: code = NotFound desc = could not find container \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": container with ID starting with f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.547780 5004 scope.go:117] "RemoveContainer" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.548359 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} err="failed to get container status \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": rpc error: code = NotFound desc = could not find container \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": container with ID starting with 5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.548404 5004 scope.go:117] "RemoveContainer" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.548740 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} err="failed to get container status \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": rpc error: code = NotFound desc = could not find container \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": container with ID starting with 535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.548760 5004 scope.go:117] "RemoveContainer" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.549425 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} err="failed to get container status \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": rpc error: code = NotFound desc = could not find container \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": container with ID starting with 440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.549481 5004 scope.go:117] "RemoveContainer" 
containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.550299 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} err="failed to get container status \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": rpc error: code = NotFound desc = could not find container \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": container with ID starting with 16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.550339 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.550707 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} err="failed to get container status \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.550772 5004 scope.go:117] "RemoveContainer" containerID="85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.551378 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea"} err="failed to get container status \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": rpc error: code = NotFound desc = could not find container \"85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea\": container with ID starting with 85c720c18e77767f01d0cf527f41c97557733dc4836cd1a02b1ad30aa04e57ea not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.551407 5004 scope.go:117] "RemoveContainer" containerID="50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.551875 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765"} err="failed to get container status \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": rpc error: code = NotFound desc = could not find container \"50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765\": container with ID starting with 50cd7b606678036b0e4aceeb3aebb4180822ac5d0af5ffd1f5cd08e35c84d765 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.551901 5004 scope.go:117] "RemoveContainer" containerID="9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.552264 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac"} err="failed to get container status \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": rpc error: code = NotFound desc = could not find 
container \"9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac\": container with ID starting with 9dd5cfd70865f4a8dff1c8e08aff9c6774011f555b33809efe5f06ebf89570ac not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.552281 5004 scope.go:117] "RemoveContainer" containerID="f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.554244 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86"} err="failed to get container status \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": rpc error: code = NotFound desc = could not find container \"f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86\": container with ID starting with f590299cf8af9dc6ce43a73966948f56dffa6a066fb2a61d6963c4f2e9970e86 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.554273 5004 scope.go:117] "RemoveContainer" containerID="5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.555376 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302"} err="failed to get container status \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": rpc error: code = NotFound desc = could not find container \"5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302\": container with ID starting with 5d58da8459360edf6a8078b445e7d3baf0596201bcca332a7a7aed2063cba302 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.555458 5004 scope.go:117] "RemoveContainer" containerID="535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.556001 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106"} err="failed to get container status \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": rpc error: code = NotFound desc = could not find container \"535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106\": container with ID starting with 535ea69f6fee6b52990fd1a7c8d1dd92bb2def7bc9443c30b579c515e6597106 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.556026 5004 scope.go:117] "RemoveContainer" containerID="440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.556616 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f"} err="failed to get container status \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": rpc error: code = NotFound desc = could not find container \"440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f\": container with ID starting with 440a2669ceb118c499d037606ec43c22936a21090d28d440923c24c621d0724f not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.556662 5004 scope.go:117] "RemoveContainer" containerID="16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.557414 5004 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323"} err="failed to get container status \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": rpc error: code = NotFound desc = could not find container \"16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323\": container with ID starting with 16ef3121c2862aea82c0c98d40d65382724ebeb585b5ee5d2692bab1c22ce323 not found: ID does not exist" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.557442 5004 scope.go:117] "RemoveContainer" containerID="ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c" Dec 08 19:02:41 crc kubenswrapper[5004]: I1208 19:02:41.557885 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c"} err="failed to get container status \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": rpc error: code = NotFound desc = could not find container \"ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c\": container with ID starting with ac6e1ce33d78e33bb0de97e1e7fce0d448a0767a066421d12c7ed71bc7b2117c not found: ID does not exist" Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.273875 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" event={"ID":"95c4fdb3-e6bb-4f46-bd0a-e80844c948ee","Type":"ContainerStarted","Data":"13ca68ba0a4708920bb9e631517eafdf252602459e82edaba00ac230d3daf785"} Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.276110 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxdkt_e00ae10b-1af7-4d7e-aad6-135dac0d2aa5/kube-multus/0.log" Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.276260 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxdkt" event={"ID":"e00ae10b-1af7-4d7e-aad6-135dac0d2aa5","Type":"ContainerStarted","Data":"0cc84fd108f1680362cf689ab1af3338c4399271361980f178e96e6b5f6df270"} Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.277712 5004 generic.go:358] "Generic (PLEG): container finished" podID="6552c0dc-d410-496c-ba60-6f2b5918557f" containerID="5068e60c9a4dbe464337c9b2ce7ac16d2f68c425c320fb32f611f7279c0ce74e" exitCode=0 Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.277775 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerDied","Data":"5068e60c9a4dbe464337c9b2ce7ac16d2f68c425c320fb32f611f7279c0ce74e"} Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.280727 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"daa37729c8530f74d70915cf99c934c3ac2ae835abd31dacc1b6de27fba70682"} Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.291714 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-cltnc" podStartSLOduration=2.291693145 podStartE2EDuration="2.291693145s" podCreationTimestamp="2025-12-08 19:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:02:42.288444175 +0000 UTC 
m=+695.937352483" watchObservedRunningTime="2025-12-08 19:02:42.291693145 +0000 UTC m=+695.940601453" Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.718842 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02dfac61-6fa6-441d-83f2-c2f275a144e8" path="/var/lib/kubelet/pods/02dfac61-6fa6-441d-83f2-c2f275a144e8/volumes" Dec 08 19:02:42 crc kubenswrapper[5004]: I1208 19:02:42.720045 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6c2cb7-5c47-47a3-b87e-fc8544207aa8" path="/var/lib/kubelet/pods/ea6c2cb7-5c47-47a3-b87e-fc8544207aa8/volumes" Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.291943 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"ef24e641ead60874220e08b242d81d0dc9c23e741410079311f66595f1750b90"} Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.291986 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"9d287c367c00e11a34a09afb4c18f14207c96a53799a5c9f92293628f2b79be9"} Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.292000 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"00d2e5a3c42964c0fe43a638251a835d43bf962eb798ca56c7d68de944a7e469"} Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.292014 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"271ebebfa1ba7f5a104f951b53beb845676a49abc79e25d73f956107ea7d8351"} Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.292024 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"e78149d108cff7d9b3f90e65458291592eb33b46d543e840eb8d62ecc0c0fb60"} Dec 08 19:02:43 crc kubenswrapper[5004]: I1208 19:02:43.292034 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"93eff758a04b6a59f45d2b15abfe8f4518dd6fe423ab782c984994a7530bd56c"} Dec 08 19:02:46 crc kubenswrapper[5004]: I1208 19:02:46.310064 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"1100a7876f24ff937b72e4400bc7a94fb055fa7669e1bf3fe21971530bfac25f"} Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.329712 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" event={"ID":"6552c0dc-d410-496c-ba60-6f2b5918557f","Type":"ContainerStarted","Data":"45b70ba14dd28c919e7b68eff8de412d1103fd8b464e66e6b92369941e1c8cea"} Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.330261 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.330278 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 
19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.330289 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.369259 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.378252 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" podStartSLOduration=9.378236833 podStartE2EDuration="9.378236833s" podCreationTimestamp="2025-12-08 19:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:02:49.368957337 +0000 UTC m=+703.017865645" watchObservedRunningTime="2025-12-08 19:02:49.378236833 +0000 UTC m=+703.027145161" Dec 08 19:02:49 crc kubenswrapper[5004]: I1208 19:02:49.379756 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:03:01 crc kubenswrapper[5004]: I1208 19:03:01.000923 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:03:01 crc kubenswrapper[5004]: I1208 19:03:01.002165 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:03:01 crc kubenswrapper[5004]: I1208 19:03:01.002258 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 19:03:01 crc kubenswrapper[5004]: I1208 19:03:01.003293 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51"} pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:03:01 crc kubenswrapper[5004]: I1208 19:03:01.003392 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" containerID="cri-o://756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51" gracePeriod=600 Dec 08 19:03:02 crc kubenswrapper[5004]: I1208 19:03:02.406497 5004 generic.go:358] "Generic (PLEG): container finished" podID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerID="756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51" exitCode=0 Dec 08 19:03:02 crc kubenswrapper[5004]: I1208 19:03:02.406593 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerDied","Data":"756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51"} Dec 08 19:03:02 crc kubenswrapper[5004]: I1208 
19:03:02.407122 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973"} Dec 08 19:03:02 crc kubenswrapper[5004]: I1208 19:03:02.407150 5004 scope.go:117] "RemoveContainer" containerID="d7a8989340f90bfb7d76010c674a653598e32c9027b446c9896f021c5afe48f1" Dec 08 19:03:21 crc kubenswrapper[5004]: I1208 19:03:21.370123 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lmc4j" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.641603 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.650851 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.673088 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.750818 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.750894 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.750990 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnmc\" (UniqueName: \"kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.851753 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.851811 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.851888 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnmc\" (UniqueName: \"kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc\") pod \"certified-operators-7269t\" (UID: 
\"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.852818 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.853182 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:23 crc kubenswrapper[5004]: I1208 19:03:23.891570 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnmc\" (UniqueName: \"kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc\") pod \"certified-operators-7269t\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:24 crc kubenswrapper[5004]: I1208 19:03:24.015524 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:24 crc kubenswrapper[5004]: I1208 19:03:24.254933 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:24 crc kubenswrapper[5004]: I1208 19:03:24.528266 5004 generic.go:358] "Generic (PLEG): container finished" podID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerID="788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a" exitCode=0 Dec 08 19:03:24 crc kubenswrapper[5004]: I1208 19:03:24.528339 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerDied","Data":"788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a"} Dec 08 19:03:24 crc kubenswrapper[5004]: I1208 19:03:24.528367 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerStarted","Data":"2d2ef6ca8f89b40234cc009ccea652009097f6bae7497346fa8e305b82a360dd"} Dec 08 19:03:25 crc kubenswrapper[5004]: I1208 19:03:25.536336 5004 generic.go:358] "Generic (PLEG): container finished" podID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerID="4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393" exitCode=0 Dec 08 19:03:25 crc kubenswrapper[5004]: I1208 19:03:25.536508 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerDied","Data":"4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393"} Dec 08 19:03:26 crc kubenswrapper[5004]: I1208 19:03:26.544773 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerStarted","Data":"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451"} Dec 08 19:03:26 crc kubenswrapper[5004]: I1208 19:03:26.564346 5004 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-7269t" podStartSLOduration=3.036134833 podStartE2EDuration="3.564329244s" podCreationTimestamp="2025-12-08 19:03:23 +0000 UTC" firstStartedPulling="2025-12-08 19:03:24.529220808 +0000 UTC m=+738.178129116" lastFinishedPulling="2025-12-08 19:03:25.057415219 +0000 UTC m=+738.706323527" observedRunningTime="2025-12-08 19:03:26.563121165 +0000 UTC m=+740.212029473" watchObservedRunningTime="2025-12-08 19:03:26.564329244 +0000 UTC m=+740.213237552" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.417245 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.447358 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.447557 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.607362 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.607659 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.607779 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zzp\" (UniqueName: \"kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.708976 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.709036 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.709061 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zzp\" (UniqueName: \"kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.709878 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.710255 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.731853 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zzp\" (UniqueName: \"kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp\") pod \"redhat-operators-nhld8\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:28 crc kubenswrapper[5004]: I1208 19:03:28.768589 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:29 crc kubenswrapper[5004]: I1208 19:03:29.004933 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:29 crc kubenswrapper[5004]: I1208 19:03:29.564033 5004 generic.go:358] "Generic (PLEG): container finished" podID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerID="28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1" exitCode=0 Dec 08 19:03:29 crc kubenswrapper[5004]: I1208 19:03:29.564214 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerDied","Data":"28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1"} Dec 08 19:03:29 crc kubenswrapper[5004]: I1208 19:03:29.564411 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerStarted","Data":"ece0dd14f17b89e530ba76ac90b45b1e0af921cec579969e35efdde81300b7a1"} Dec 08 19:03:30 crc kubenswrapper[5004]: I1208 19:03:30.573763 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerStarted","Data":"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92"} Dec 08 19:03:31 crc kubenswrapper[5004]: I1208 19:03:31.581707 5004 generic.go:358] "Generic (PLEG): container finished" podID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerID="2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92" exitCode=0 Dec 08 19:03:31 crc kubenswrapper[5004]: I1208 19:03:31.581881 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerDied","Data":"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92"} Dec 08 19:03:32 crc kubenswrapper[5004]: I1208 19:03:32.598253 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerStarted","Data":"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a"} Dec 08 19:03:34 crc 
kubenswrapper[5004]: I1208 19:03:34.016065 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:34 crc kubenswrapper[5004]: I1208 19:03:34.016397 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:34 crc kubenswrapper[5004]: I1208 19:03:34.061932 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:34 crc kubenswrapper[5004]: I1208 19:03:34.081738 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhld8" podStartSLOduration=5.457175176 podStartE2EDuration="6.081722256s" podCreationTimestamp="2025-12-08 19:03:28 +0000 UTC" firstStartedPulling="2025-12-08 19:03:29.565360071 +0000 UTC m=+743.214268379" lastFinishedPulling="2025-12-08 19:03:30.189907151 +0000 UTC m=+743.838815459" observedRunningTime="2025-12-08 19:03:32.71267797 +0000 UTC m=+746.361586278" watchObservedRunningTime="2025-12-08 19:03:34.081722256 +0000 UTC m=+747.730630564" Dec 08 19:03:34 crc kubenswrapper[5004]: I1208 19:03:34.643096 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:35 crc kubenswrapper[5004]: I1208 19:03:35.211990 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:36 crc kubenswrapper[5004]: I1208 19:03:36.619181 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7269t" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="registry-server" containerID="cri-o://67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451" gracePeriod=2 Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.511367 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.520696 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content\") pod \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.520777 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities\") pod \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.520899 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnmc\" (UniqueName: \"kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc\") pod \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\" (UID: \"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7\") " Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.521977 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities" (OuterVolumeSpecName: "utilities") pod "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" (UID: "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.533536 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc" (OuterVolumeSpecName: "kube-api-access-hmnmc") pod "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" (UID: "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7"). InnerVolumeSpecName "kube-api-access-hmnmc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.564988 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" (UID: "f3e4ab93-9bcf-4e3f-9923-01c89fa628a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.624824 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.625604 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hmnmc\" (UniqueName: \"kubernetes.io/projected/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-kube-api-access-hmnmc\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.625694 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.630439 5004 generic.go:358] "Generic (PLEG): container finished" podID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerID="67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451" exitCode=0 Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.630706 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerDied","Data":"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451"} Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.630781 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7269t" event={"ID":"f3e4ab93-9bcf-4e3f-9923-01c89fa628a7","Type":"ContainerDied","Data":"2d2ef6ca8f89b40234cc009ccea652009097f6bae7497346fa8e305b82a360dd"} Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.630802 5004 scope.go:117] "RemoveContainer" containerID="67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.630953 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7269t" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.648816 5004 scope.go:117] "RemoveContainer" containerID="4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.671698 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.677958 5004 scope.go:117] "RemoveContainer" containerID="788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.679247 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7269t"] Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.696186 5004 scope.go:117] "RemoveContainer" containerID="67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451" Dec 08 19:03:37 crc kubenswrapper[5004]: E1208 19:03:37.696800 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451\": container with ID starting with 67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451 not found: ID does not exist" containerID="67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.696855 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451"} err="failed to get container status \"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451\": rpc error: code = NotFound desc = could not find container \"67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451\": container with ID starting with 67316a6c1a4790227d03daaa1b96ca6101074ca4e538bd48576911323f025451 not found: ID does not exist" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.696888 5004 scope.go:117] "RemoveContainer" containerID="4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393" Dec 08 19:03:37 crc kubenswrapper[5004]: E1208 19:03:37.697322 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393\": container with ID starting with 4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393 not found: ID does not exist" containerID="4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.697456 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393"} err="failed to get container status \"4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393\": rpc error: code = NotFound desc = could not find container \"4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393\": container with ID starting with 4bfb309e0ec2741c2ba142619ad374f729c1b901e3fb1c7c7fd3832ca339b393 not found: ID does not exist" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.697631 5004 scope.go:117] "RemoveContainer" containerID="788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a" Dec 08 19:03:37 crc kubenswrapper[5004]: E1208 19:03:37.698193 5004 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a\": container with ID starting with 788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a not found: ID does not exist" containerID="788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a" Dec 08 19:03:37 crc kubenswrapper[5004]: I1208 19:03:37.698220 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a"} err="failed to get container status \"788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a\": rpc error: code = NotFound desc = could not find container \"788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a\": container with ID starting with 788892338da3977dd41e9c8ca72070557689639ccdae0efa3725f2932970269a not found: ID does not exist" Dec 08 19:03:38 crc kubenswrapper[5004]: I1208 19:03:38.717618 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" path="/var/lib/kubelet/pods/f3e4ab93-9bcf-4e3f-9923-01c89fa628a7/volumes" Dec 08 19:03:38 crc kubenswrapper[5004]: I1208 19:03:38.769615 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:38 crc kubenswrapper[5004]: I1208 19:03:38.769666 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:38 crc kubenswrapper[5004]: I1208 19:03:38.809716 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:39 crc kubenswrapper[5004]: I1208 19:03:39.684099 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:40 crc kubenswrapper[5004]: I1208 19:03:40.608301 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:41 crc kubenswrapper[5004]: I1208 19:03:41.655533 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhld8" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="registry-server" containerID="cri-o://8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a" gracePeriod=2 Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.204482 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.293980 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content\") pod \"b21bc27a-96ee-45fe-9a85-e10206b5c993\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.294032 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities\") pod \"b21bc27a-96ee-45fe-9a85-e10206b5c993\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.294052 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zzp\" (UniqueName: \"kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp\") pod \"b21bc27a-96ee-45fe-9a85-e10206b5c993\" (UID: \"b21bc27a-96ee-45fe-9a85-e10206b5c993\") " Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.296208 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities" (OuterVolumeSpecName: "utilities") pod "b21bc27a-96ee-45fe-9a85-e10206b5c993" (UID: "b21bc27a-96ee-45fe-9a85-e10206b5c993"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.300727 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp" (OuterVolumeSpecName: "kube-api-access-c4zzp") pod "b21bc27a-96ee-45fe-9a85-e10206b5c993" (UID: "b21bc27a-96ee-45fe-9a85-e10206b5c993"). InnerVolumeSpecName "kube-api-access-c4zzp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.394889 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.395234 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4zzp\" (UniqueName: \"kubernetes.io/projected/b21bc27a-96ee-45fe-9a85-e10206b5c993-kube-api-access-c4zzp\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.398845 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b21bc27a-96ee-45fe-9a85-e10206b5c993" (UID: "b21bc27a-96ee-45fe-9a85-e10206b5c993"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.496229 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21bc27a-96ee-45fe-9a85-e10206b5c993-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.668647 5004 generic.go:358] "Generic (PLEG): container finished" podID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerID="8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a" exitCode=0 Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.668722 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerDied","Data":"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a"} Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.668760 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhld8" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.668846 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhld8" event={"ID":"b21bc27a-96ee-45fe-9a85-e10206b5c993","Type":"ContainerDied","Data":"ece0dd14f17b89e530ba76ac90b45b1e0af921cec579969e35efdde81300b7a1"} Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.668899 5004 scope.go:117] "RemoveContainer" containerID="8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.685264 5004 scope.go:117] "RemoveContainer" containerID="2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.714225 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.717011 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhld8"] Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.727818 5004 scope.go:117] "RemoveContainer" containerID="28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.758333 5004 scope.go:117] "RemoveContainer" containerID="8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a" Dec 08 19:03:43 crc kubenswrapper[5004]: E1208 19:03:43.759039 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a\": container with ID starting with 8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a not found: ID does not exist" containerID="8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.759147 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a"} err="failed to get container status \"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a\": rpc error: code = NotFound desc = could not find container \"8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a\": container with ID starting with 8b29b7002cf06f2c60b3b3c1901302fcabc3b0e07be59e7047064e5adbc7f93a not found: ID does not exist" Dec 08 19:03:43 crc 
kubenswrapper[5004]: I1208 19:03:43.759179 5004 scope.go:117] "RemoveContainer" containerID="2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92" Dec 08 19:03:43 crc kubenswrapper[5004]: E1208 19:03:43.759536 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92\": container with ID starting with 2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92 not found: ID does not exist" containerID="2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.759565 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92"} err="failed to get container status \"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92\": rpc error: code = NotFound desc = could not find container \"2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92\": container with ID starting with 2097b85ecd80eb928dc68051fed1f5e7cdba1b21a9b30f9a29910bf7b4f01a92 not found: ID does not exist" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.759583 5004 scope.go:117] "RemoveContainer" containerID="28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1" Dec 08 19:03:43 crc kubenswrapper[5004]: E1208 19:03:43.759861 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1\": container with ID starting with 28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1 not found: ID does not exist" containerID="28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1" Dec 08 19:03:43 crc kubenswrapper[5004]: I1208 19:03:43.759883 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1"} err="failed to get container status \"28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1\": rpc error: code = NotFound desc = could not find container \"28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1\": container with ID starting with 28ce5cad9d399095439c5a175ae4d75fde6927cbad6f7540b2938bd44b4456a1 not found: ID does not exist" Dec 08 19:03:44 crc kubenswrapper[5004]: I1208 19:03:44.717432 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" path="/var/lib/kubelet/pods/b21bc27a-96ee-45fe-9a85-e10206b5c993/volumes" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.018201 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.018963 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="extract-utilities" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.018978 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="extract-utilities" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.018991 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="extract-content" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 
19:03:46.018998 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="extract-content" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019016 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="extract-content" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019024 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="extract-content" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019038 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019044 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019059 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019066 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019245 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="extract-utilities" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019267 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="extract-utilities" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019397 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="f3e4ab93-9bcf-4e3f-9923-01c89fa628a7" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.019414 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="b21bc27a-96ee-45fe-9a85-e10206b5c993" containerName="registry-server" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.044650 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.044833 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.139458 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6gjq\" (UniqueName: \"kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.139542 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.139827 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.241589 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.241808 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6gjq\" (UniqueName: \"kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.242006 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.242296 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.242530 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content\") pod \"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.267065 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6gjq\" (UniqueName: \"kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq\") pod 
\"community-operators-bnqql\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.378513 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:46 crc kubenswrapper[5004]: I1208 19:03:46.871810 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:03:47 crc kubenswrapper[5004]: I1208 19:03:47.690687 5004 generic.go:358] "Generic (PLEG): container finished" podID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerID="7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739" exitCode=0 Dec 08 19:03:47 crc kubenswrapper[5004]: I1208 19:03:47.690834 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerDied","Data":"7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739"} Dec 08 19:03:47 crc kubenswrapper[5004]: I1208 19:03:47.690899 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerStarted","Data":"abe002c8c779ab7c69411f0a6f3bafb7ea6052ffa8e4b0515edb60d8e9108ed4"} Dec 08 19:03:48 crc kubenswrapper[5004]: I1208 19:03:48.700838 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerStarted","Data":"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96"} Dec 08 19:03:49 crc kubenswrapper[5004]: I1208 19:03:49.716626 5004 generic.go:358] "Generic (PLEG): container finished" podID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerID="42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96" exitCode=0 Dec 08 19:03:49 crc kubenswrapper[5004]: I1208 19:03:49.716670 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerDied","Data":"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96"} Dec 08 19:03:50 crc kubenswrapper[5004]: I1208 19:03:50.725280 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerStarted","Data":"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1"} Dec 08 19:03:50 crc kubenswrapper[5004]: I1208 19:03:50.747035 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnqql" podStartSLOduration=3.995587412 podStartE2EDuration="4.747020236s" podCreationTimestamp="2025-12-08 19:03:46 +0000 UTC" firstStartedPulling="2025-12-08 19:03:47.708415372 +0000 UTC m=+761.357323670" lastFinishedPulling="2025-12-08 19:03:48.459848186 +0000 UTC m=+762.108756494" observedRunningTime="2025-12-08 19:03:50.745713895 +0000 UTC m=+764.394622203" watchObservedRunningTime="2025-12-08 19:03:50.747020236 +0000 UTC m=+764.395928564" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.116484 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.116807 5004 kuberuntime_container.go:858] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-sjp9l" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="registry-server" containerID="cri-o://69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd" gracePeriod=30 Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.421443 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.519526 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities\") pod \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.519655 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2r8g\" (UniqueName: \"kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g\") pod \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.519691 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content\") pod \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\" (UID: \"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f\") " Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.520831 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities" (OuterVolumeSpecName: "utilities") pod "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" (UID: "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.525789 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g" (OuterVolumeSpecName: "kube-api-access-r2r8g") pod "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" (UID: "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f"). InnerVolumeSpecName "kube-api-access-r2r8g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.541969 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" (UID: "17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.620894 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2r8g\" (UniqueName: \"kubernetes.io/projected/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-kube-api-access-r2r8g\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.620940 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.620953 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.737515 5004 generic.go:358] "Generic (PLEG): container finished" podID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerID="69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd" exitCode=0 Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.737590 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjp9l" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.737589 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerDied","Data":"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd"} Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.737635 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjp9l" event={"ID":"17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f","Type":"ContainerDied","Data":"6d9ffce6c346bcc7015f507602ca74d4c4493d2f9773b2ce22d5090092f63c57"} Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.737658 5004 scope.go:117] "RemoveContainer" containerID="69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.762240 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.765618 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjp9l"] Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.765808 5004 scope.go:117] "RemoveContainer" containerID="926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.779657 5004 scope.go:117] "RemoveContainer" containerID="25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.796319 5004 scope.go:117] "RemoveContainer" containerID="69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd" Dec 08 19:03:52 crc kubenswrapper[5004]: E1208 19:03:52.796854 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd\": container with ID starting with 69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd not found: ID does not exist" containerID="69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.796902 5004 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd"} err="failed to get container status \"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd\": rpc error: code = NotFound desc = could not find container \"69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd\": container with ID starting with 69ce78b86449367398b1623103c51e0a8e1deec81d4c1525eaa027020dfdcefd not found: ID does not exist" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.796934 5004 scope.go:117] "RemoveContainer" containerID="926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862" Dec 08 19:03:52 crc kubenswrapper[5004]: E1208 19:03:52.797347 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862\": container with ID starting with 926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862 not found: ID does not exist" containerID="926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.797397 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862"} err="failed to get container status \"926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862\": rpc error: code = NotFound desc = could not find container \"926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862\": container with ID starting with 926557f3889305c049033e30885b18fd95741b1a6411087e9eb0bcceff0b2862 not found: ID does not exist" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.797418 5004 scope.go:117] "RemoveContainer" containerID="25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a" Dec 08 19:03:52 crc kubenswrapper[5004]: E1208 19:03:52.797730 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a\": container with ID starting with 25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a not found: ID does not exist" containerID="25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a" Dec 08 19:03:52 crc kubenswrapper[5004]: I1208 19:03:52.797761 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a"} err="failed to get container status \"25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a\": rpc error: code = NotFound desc = could not find container \"25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a\": container with ID starting with 25ef231521272a74eea8fa427eec7ceef847d14bf3fdc1bccf01c116b5d62f9a not found: ID does not exist" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.218381 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rsgl2"] Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219380 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="extract-utilities" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219398 5004 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="extract-utilities" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219414 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="registry-server" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219422 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="registry-server" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219435 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="extract-content" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219444 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="extract-content" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.219581 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" containerName="registry-server" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.236940 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.252213 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rsgl2"] Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331549 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-certificates\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331597 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xdz\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-kube-api-access-l4xdz\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331637 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/815bf5be-bf5a-4956-8462-b423e4c6dd86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331659 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331699 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-trusted-ca\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: 
\"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331745 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/815bf5be-bf5a-4956-8462-b423e4c6dd86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331782 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.331813 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-tls\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.367519 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.432958 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-certificates\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433019 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4xdz\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-kube-api-access-l4xdz\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433286 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/815bf5be-bf5a-4956-8462-b423e4c6dd86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433352 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" 
Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433434 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-trusted-ca\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433515 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/815bf5be-bf5a-4956-8462-b423e4c6dd86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433592 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-tls\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.433819 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/815bf5be-bf5a-4956-8462-b423e4c6dd86-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.434490 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-certificates\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.435219 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/815bf5be-bf5a-4956-8462-b423e4c6dd86-trusted-ca\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.441943 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/815bf5be-bf5a-4956-8462-b423e4c6dd86-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.442006 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-registry-tls\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.455331 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4xdz\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-kube-api-access-l4xdz\") pod 
\"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.462541 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/815bf5be-bf5a-4956-8462-b423e4c6dd86-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rsgl2\" (UID: \"815bf5be-bf5a-4956-8462-b423e4c6dd86\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.559267 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:53 crc kubenswrapper[5004]: I1208 19:03:53.751402 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rsgl2"] Dec 08 19:03:53 crc kubenswrapper[5004]: W1208 19:03:53.758585 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815bf5be_bf5a_4956_8462_b423e4c6dd86.slice/crio-7ffdc06cdaaa47ff0810d7fbd54bab870217faf7983a56061de01faf20a24943 WatchSource:0}: Error finding container 7ffdc06cdaaa47ff0810d7fbd54bab870217faf7983a56061de01faf20a24943: Status 404 returned error can't find the container with id 7ffdc06cdaaa47ff0810d7fbd54bab870217faf7983a56061de01faf20a24943 Dec 08 19:03:54 crc kubenswrapper[5004]: I1208 19:03:54.718500 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f" path="/var/lib/kubelet/pods/17e88bc3-9ae5-4d4d-af71-7ebbb0b00a4f/volumes" Dec 08 19:03:54 crc kubenswrapper[5004]: I1208 19:03:54.754955 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" event={"ID":"815bf5be-bf5a-4956-8462-b423e4c6dd86","Type":"ContainerStarted","Data":"6f110b2f5f0f8bf24a5b37fcb05dea0c59f92b5872fea73a38484da77330a28d"} Dec 08 19:03:54 crc kubenswrapper[5004]: I1208 19:03:54.755028 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:03:54 crc kubenswrapper[5004]: I1208 19:03:54.755044 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" event={"ID":"815bf5be-bf5a-4956-8462-b423e4c6dd86","Type":"ContainerStarted","Data":"7ffdc06cdaaa47ff0810d7fbd54bab870217faf7983a56061de01faf20a24943"} Dec 08 19:03:54 crc kubenswrapper[5004]: I1208 19:03:54.776869 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" podStartSLOduration=1.776839733 podStartE2EDuration="1.776839733s" podCreationTimestamp="2025-12-08 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:03:54.773487088 +0000 UTC m=+768.422395386" watchObservedRunningTime="2025-12-08 19:03:54.776839733 +0000 UTC m=+768.425748041" Dec 08 19:03:56 crc kubenswrapper[5004]: I1208 19:03:56.379542 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:56 crc kubenswrapper[5004]: I1208 19:03:56.379737 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:56 crc kubenswrapper[5004]: I1208 19:03:56.423464 5004 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:56 crc kubenswrapper[5004]: I1208 19:03:56.537007 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g"] Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.206379 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g"] Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.206666 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.211026 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.265133 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.286463 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.286556 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.286576 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhj6\" (UniqueName: \"kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.388373 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.388463 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.388506 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhj6\" (UniqueName: \"kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.388957 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.389039 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.410937 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhj6\" (UniqueName: \"kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.533442 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:03:57 crc kubenswrapper[5004]: I1208 19:03:57.783232 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g"] Dec 08 19:03:57 crc kubenswrapper[5004]: W1208 19:03:57.784032 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f40f1c3_cee5_4f19_8112_d4f47134e903.slice/crio-1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085 WatchSource:0}: Error finding container 1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085: Status 404 returned error can't find the container with id 1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085 Dec 08 19:03:58 crc kubenswrapper[5004]: I1208 19:03:58.783469 5004 generic.go:358] "Generic (PLEG): container finished" podID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerID="6e67b210ba2e219d502f78cfcda995684d875aad75b79f223b8a7636d98c2b75" exitCode=0 Dec 08 19:03:58 crc kubenswrapper[5004]: I1208 19:03:58.783569 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" event={"ID":"8f40f1c3-cee5-4f19-8112-d4f47134e903","Type":"ContainerDied","Data":"6e67b210ba2e219d502f78cfcda995684d875aad75b79f223b8a7636d98c2b75"} Dec 08 19:03:58 crc kubenswrapper[5004]: I1208 19:03:58.783963 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" event={"ID":"8f40f1c3-cee5-4f19-8112-d4f47134e903","Type":"ContainerStarted","Data":"1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085"} Dec 08 19:04:00 crc kubenswrapper[5004]: I1208 19:04:00.806284 5004 generic.go:358] "Generic (PLEG): container finished" podID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerID="06be115d78914d1dd91e0e907a3c10835803dd3288f05b591291aa9780103b32" exitCode=0 Dec 08 19:04:00 crc kubenswrapper[5004]: I1208 19:04:00.806344 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" event={"ID":"8f40f1c3-cee5-4f19-8112-d4f47134e903","Type":"ContainerDied","Data":"06be115d78914d1dd91e0e907a3c10835803dd3288f05b591291aa9780103b32"} Dec 08 19:04:00 crc kubenswrapper[5004]: I1208 19:04:00.842281 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:04:00 crc kubenswrapper[5004]: I1208 19:04:00.842639 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnqql" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="registry-server" containerID="cri-o://2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1" gracePeriod=2 Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.209887 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.361744 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities\") pod \"8b1c6582-0754-465a-a55c-6e9d968d77e7\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.361807 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content\") pod \"8b1c6582-0754-465a-a55c-6e9d968d77e7\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.361956 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6gjq\" (UniqueName: \"kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq\") pod \"8b1c6582-0754-465a-a55c-6e9d968d77e7\" (UID: \"8b1c6582-0754-465a-a55c-6e9d968d77e7\") " Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.363118 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities" (OuterVolumeSpecName: "utilities") pod "8b1c6582-0754-465a-a55c-6e9d968d77e7" (UID: "8b1c6582-0754-465a-a55c-6e9d968d77e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.371374 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq" (OuterVolumeSpecName: "kube-api-access-t6gjq") pod "8b1c6582-0754-465a-a55c-6e9d968d77e7" (UID: "8b1c6582-0754-465a-a55c-6e9d968d77e7"). InnerVolumeSpecName "kube-api-access-t6gjq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.415525 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b1c6582-0754-465a-a55c-6e9d968d77e7" (UID: "8b1c6582-0754-465a-a55c-6e9d968d77e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.463967 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6gjq\" (UniqueName: \"kubernetes.io/projected/8b1c6582-0754-465a-a55c-6e9d968d77e7-kube-api-access-t6gjq\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.464007 5004 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.464018 5004 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1c6582-0754-465a-a55c-6e9d968d77e7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.814212 5004 generic.go:358] "Generic (PLEG): container finished" podID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerID="2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1" exitCode=0 Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.814358 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerDied","Data":"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1"} Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.814395 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnqql" event={"ID":"8b1c6582-0754-465a-a55c-6e9d968d77e7","Type":"ContainerDied","Data":"abe002c8c779ab7c69411f0a6f3bafb7ea6052ffa8e4b0515edb60d8e9108ed4"} Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.814416 5004 scope.go:117] "RemoveContainer" containerID="2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.814433 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnqql" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.817718 5004 generic.go:358] "Generic (PLEG): container finished" podID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerID="79b9e4e37be5d7cd257dba71993e59d9091900f16f3b70442d43134acc9c932e" exitCode=0 Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.817783 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" event={"ID":"8f40f1c3-cee5-4f19-8112-d4f47134e903","Type":"ContainerDied","Data":"79b9e4e37be5d7cd257dba71993e59d9091900f16f3b70442d43134acc9c932e"} Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.834684 5004 scope.go:117] "RemoveContainer" containerID="42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.871960 5004 scope.go:117] "RemoveContainer" containerID="7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.872021 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.879453 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnqql"] Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.889976 5004 scope.go:117] "RemoveContainer" containerID="2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1" Dec 08 19:04:01 crc kubenswrapper[5004]: E1208 19:04:01.890656 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1\": container with ID starting with 2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1 not found: ID does not exist" containerID="2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.890691 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1"} err="failed to get container status \"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1\": rpc error: code = NotFound desc = could not find container \"2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1\": container with ID starting with 2781339d813170bea7c5a437f714a2c00203e5d659872f957b140e5375b0d4a1 not found: ID does not exist" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.890711 5004 scope.go:117] "RemoveContainer" containerID="42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96" Dec 08 19:04:01 crc kubenswrapper[5004]: E1208 19:04:01.891067 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96\": container with ID starting with 42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96 not found: ID does not exist" containerID="42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.891161 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96"} err="failed to get container status 
\"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96\": rpc error: code = NotFound desc = could not find container \"42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96\": container with ID starting with 42687f0f213a4eafc499e9a78f676880794ba47eb9d40803b1d206090f0d4e96 not found: ID does not exist" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.891204 5004 scope.go:117] "RemoveContainer" containerID="7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739" Dec 08 19:04:01 crc kubenswrapper[5004]: E1208 19:04:01.892128 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739\": container with ID starting with 7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739 not found: ID does not exist" containerID="7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739" Dec 08 19:04:01 crc kubenswrapper[5004]: I1208 19:04:01.892163 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739"} err="failed to get container status \"7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739\": rpc error: code = NotFound desc = could not find container \"7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739\": container with ID starting with 7b863cf303254ea7538a7eb52e67658db480ce1dc21eb59587fe2ecc7f52d739 not found: ID does not exist" Dec 08 19:04:02 crc kubenswrapper[5004]: I1208 19:04:02.719603 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" path="/var/lib/kubelet/pods/8b1c6582-0754-465a-a55c-6e9d968d77e7/volumes" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.043944 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.130152 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhj6\" (UniqueName: \"kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6\") pod \"8f40f1c3-cee5-4f19-8112-d4f47134e903\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.130220 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util\") pod \"8f40f1c3-cee5-4f19-8112-d4f47134e903\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.130310 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle\") pod \"8f40f1c3-cee5-4f19-8112-d4f47134e903\" (UID: \"8f40f1c3-cee5-4f19-8112-d4f47134e903\") " Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.133407 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle" (OuterVolumeSpecName: "bundle") pod "8f40f1c3-cee5-4f19-8112-d4f47134e903" (UID: "8f40f1c3-cee5-4f19-8112-d4f47134e903"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.140057 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6" (OuterVolumeSpecName: "kube-api-access-2qhj6") pod "8f40f1c3-cee5-4f19-8112-d4f47134e903" (UID: "8f40f1c3-cee5-4f19-8112-d4f47134e903"). InnerVolumeSpecName "kube-api-access-2qhj6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.144181 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util" (OuterVolumeSpecName: "util") pod "8f40f1c3-cee5-4f19-8112-d4f47134e903" (UID: "8f40f1c3-cee5-4f19-8112-d4f47134e903"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.231877 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2qhj6\" (UniqueName: \"kubernetes.io/projected/8f40f1c3-cee5-4f19-8112-d4f47134e903-kube-api-access-2qhj6\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.231926 5004 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.231938 5004 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f40f1c3-cee5-4f19-8112-d4f47134e903-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.837857 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" event={"ID":"8f40f1c3-cee5-4f19-8112-d4f47134e903","Type":"ContainerDied","Data":"1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085"} Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.837895 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7748bcada237046fa8e23966110c733e2fd4e5615f9edbc0ac6a06cc2b1085" Dec 08 19:04:03 crc kubenswrapper[5004]: I1208 19:04:03.837897 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210pt78g" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.109474 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c"] Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110572 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="extract" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110590 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="extract" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110613 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="extract-utilities" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110621 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="extract-utilities" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110636 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="registry-server" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110645 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="registry-server" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110662 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="pull" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110669 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="pull" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110681 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="util" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110688 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="util" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110704 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="extract-content" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110713 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="extract-content" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110820 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f40f1c3-cee5-4f19-8112-d4f47134e903" containerName="extract" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.110835 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b1c6582-0754-465a-a55c-6e9d968d77e7" containerName="registry-server" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.209751 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c"] Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.209909 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.212342 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.269293 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2xx\" (UniqueName: \"kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.269390 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.269614 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.371146 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.371254 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.371314 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2xx\" (UniqueName: \"kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.371913 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.372140 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.390995 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2xx\" (UniqueName: \"kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.529865 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.748379 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c"] Dec 08 19:04:06 crc kubenswrapper[5004]: I1208 19:04:06.857210 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerStarted","Data":"77e582f8fb1d16e9ab86331990a26a305962499ccd3150979ebfcd18936dd547"} Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.135289 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc"] Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.139285 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.150713 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc"] Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.181366 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.181587 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfs6c\" (UniqueName: \"kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.181668 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.283126 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.283233 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfs6c\" (UniqueName: \"kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.283271 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.283727 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.284023 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.306409 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfs6c\" (UniqueName: \"kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.454321 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.846332 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc"] Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.870978 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" event={"ID":"78847600-f534-451d-9c0e-6ce1942782c7","Type":"ContainerStarted","Data":"235c01cade16fd8e24f27be8c069991fe2f40984785fdf9d57b64cb36e3fe55a"} Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.872597 5004 generic.go:358] "Generic (PLEG): container finished" podID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerID="5086c71a0d3e33dab45ac3c76fbd0b7407b72a6eabd7d52f84c4468ee4c153ab" exitCode=0 Dec 08 19:04:07 crc kubenswrapper[5004]: I1208 19:04:07.872675 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerDied","Data":"5086c71a0d3e33dab45ac3c76fbd0b7407b72a6eabd7d52f84c4468ee4c153ab"} Dec 08 19:04:08 crc kubenswrapper[5004]: I1208 19:04:08.883683 5004 generic.go:358] "Generic (PLEG): container finished" podID="78847600-f534-451d-9c0e-6ce1942782c7" containerID="0d11592d09395e8759ae67bce86a42325ee7b122a35f96a92a37d20ebb6c3780" exitCode=0 Dec 08 19:04:08 crc kubenswrapper[5004]: I1208 19:04:08.883738 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" event={"ID":"78847600-f534-451d-9c0e-6ce1942782c7","Type":"ContainerDied","Data":"0d11592d09395e8759ae67bce86a42325ee7b122a35f96a92a37d20ebb6c3780"} Dec 08 19:04:09 crc kubenswrapper[5004]: I1208 19:04:09.891876 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerStarted","Data":"883bcf1e5aad8a4b5953258a42aebd9bc292208f8072c8eed6b09f9947d542df"} Dec 08 19:04:10 crc kubenswrapper[5004]: I1208 19:04:10.980242 5004 generic.go:358] "Generic (PLEG): container finished" 
podID="78847600-f534-451d-9c0e-6ce1942782c7" containerID="faf7c71bfe2c624a5e08936a6ac823823e1be480ca19ac97ccd9833531b6ee73" exitCode=0 Dec 08 19:04:10 crc kubenswrapper[5004]: I1208 19:04:10.983224 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" event={"ID":"78847600-f534-451d-9c0e-6ce1942782c7","Type":"ContainerDied","Data":"faf7c71bfe2c624a5e08936a6ac823823e1be480ca19ac97ccd9833531b6ee73"} Dec 08 19:04:10 crc kubenswrapper[5004]: I1208 19:04:10.986451 5004 generic.go:358] "Generic (PLEG): container finished" podID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerID="883bcf1e5aad8a4b5953258a42aebd9bc292208f8072c8eed6b09f9947d542df" exitCode=0 Dec 08 19:04:10 crc kubenswrapper[5004]: I1208 19:04:10.986568 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerDied","Data":"883bcf1e5aad8a4b5953258a42aebd9bc292208f8072c8eed6b09f9947d542df"} Dec 08 19:04:12 crc kubenswrapper[5004]: I1208 19:04:12.002056 5004 generic.go:358] "Generic (PLEG): container finished" podID="78847600-f534-451d-9c0e-6ce1942782c7" containerID="d10d635c317a9fcd703b508177adaa1c8c3d8a952bb9d9a2f1c91e57eb2f09fa" exitCode=0 Dec 08 19:04:12 crc kubenswrapper[5004]: I1208 19:04:12.002272 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" event={"ID":"78847600-f534-451d-9c0e-6ce1942782c7","Type":"ContainerDied","Data":"d10d635c317a9fcd703b508177adaa1c8c3d8a952bb9d9a2f1c91e57eb2f09fa"} Dec 08 19:04:12 crc kubenswrapper[5004]: I1208 19:04:12.005985 5004 generic.go:358] "Generic (PLEG): container finished" podID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerID="4546666a9f605f521ecd27f11317883f251ef4f17e7617d52d27852b08ba353b" exitCode=0 Dec 08 19:04:12 crc kubenswrapper[5004]: I1208 19:04:12.006183 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerDied","Data":"4546666a9f605f521ecd27f11317883f251ef4f17e7617d52d27852b08ba353b"} Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.711435 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.799892 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2xx\" (UniqueName: \"kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx\") pod \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.800059 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle\") pod \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.801128 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util\") pod \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\" (UID: \"4d655f50-8aa3-4cbd-aac0-df751bd80b39\") " Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.800573 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle" (OuterVolumeSpecName: "bundle") pod "4d655f50-8aa3-4cbd-aac0-df751bd80b39" (UID: "4d655f50-8aa3-4cbd-aac0-df751bd80b39"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.801490 5004 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.829677 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx" (OuterVolumeSpecName: "kube-api-access-8z2xx") pod "4d655f50-8aa3-4cbd-aac0-df751bd80b39" (UID: "4d655f50-8aa3-4cbd-aac0-df751bd80b39"). InnerVolumeSpecName "kube-api-access-8z2xx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.837819 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util" (OuterVolumeSpecName: "util") pod "4d655f50-8aa3-4cbd-aac0-df751bd80b39" (UID: "4d655f50-8aa3-4cbd-aac0-df751bd80b39"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.902441 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8z2xx\" (UniqueName: \"kubernetes.io/projected/4d655f50-8aa3-4cbd-aac0-df751bd80b39-kube-api-access-8z2xx\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.902472 5004 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d655f50-8aa3-4cbd-aac0-df751bd80b39-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:13 crc kubenswrapper[5004]: I1208 19:04:13.952374 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.003153 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfs6c\" (UniqueName: \"kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c\") pod \"78847600-f534-451d-9c0e-6ce1942782c7\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.003213 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle\") pod \"78847600-f534-451d-9c0e-6ce1942782c7\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.003343 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util\") pod \"78847600-f534-451d-9c0e-6ce1942782c7\" (UID: \"78847600-f534-451d-9c0e-6ce1942782c7\") " Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.004687 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle" (OuterVolumeSpecName: "bundle") pod "78847600-f534-451d-9c0e-6ce1942782c7" (UID: "78847600-f534-451d-9c0e-6ce1942782c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.013265 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c" (OuterVolumeSpecName: "kube-api-access-kfs6c") pod "78847600-f534-451d-9c0e-6ce1942782c7" (UID: "78847600-f534-451d-9c0e-6ce1942782c7"). InnerVolumeSpecName "kube-api-access-kfs6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.022488 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util" (OuterVolumeSpecName: "util") pod "78847600-f534-451d-9c0e-6ce1942782c7" (UID: "78847600-f534-451d-9c0e-6ce1942782c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.035731 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.036026 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etfcbc" event={"ID":"78847600-f534-451d-9c0e-6ce1942782c7","Type":"ContainerDied","Data":"235c01cade16fd8e24f27be8c069991fe2f40984785fdf9d57b64cb36e3fe55a"} Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.036200 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235c01cade16fd8e24f27be8c069991fe2f40984785fdf9d57b64cb36e3fe55a" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.038438 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" event={"ID":"4d655f50-8aa3-4cbd-aac0-df751bd80b39","Type":"ContainerDied","Data":"77e582f8fb1d16e9ab86331990a26a305962499ccd3150979ebfcd18936dd547"} Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.038486 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e582f8fb1d16e9ab86331990a26a305962499ccd3150979ebfcd18936dd547" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.038598 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fldh2c" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.105622 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kfs6c\" (UniqueName: \"kubernetes.io/projected/78847600-f534-451d-9c0e-6ce1942782c7-kube-api-access-kfs6c\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.105662 5004 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.105679 5004 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78847600-f534-451d-9c0e-6ce1942782c7-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.119687 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk"] Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120297 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="pull" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120321 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="pull" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120337 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="util" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120345 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="util" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120377 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="util" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120387 5004 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="util" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120399 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="pull" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120406 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="pull" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120415 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120422 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120445 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120453 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120556 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d655f50-8aa3-4cbd-aac0-df751bd80b39" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.120577 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="78847600-f534-451d-9c0e-6ce1942782c7" containerName="extract" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.586521 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk"] Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.586719 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.590245 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.713204 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.713265 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqqc\" (UniqueName: \"kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.713329 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.814774 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.814851 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxqqc\" (UniqueName: \"kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.814967 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.815735 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.816550 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.833638 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxqqc\" (UniqueName: \"kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:14 crc kubenswrapper[5004]: I1208 19:04:14.971743 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:04:15 crc kubenswrapper[5004]: I1208 19:04:15.740133 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk"] Dec 08 19:04:15 crc kubenswrapper[5004]: I1208 19:04:15.782729 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rsgl2" Dec 08 19:04:15 crc kubenswrapper[5004]: I1208 19:04:15.890481 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.051853 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerStarted","Data":"ed79dad165166711c6b565e037ead2af691d53954854fd3f88b79f41848983be"} Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.812748 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n68fn"] Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.818180 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.820269 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.820666 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.820948 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-r2qd7\"" Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.830622 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n68fn"] Dec 08 19:04:16 crc kubenswrapper[5004]: I1208 19:04:16.955197 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.015245 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hp8\" (UniqueName: \"kubernetes.io/projected/af9c1da9-f756-48cf-828c-f5c468539cf9-kube-api-access-b7hp8\") pod \"obo-prometheus-operator-86648f486b-n68fn\" (UID: \"af9c1da9-f756-48cf-828c-f5c468539cf9\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.027231 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.068627 5004 generic.go:358] "Generic (PLEG): container finished" podID="801750e2-387d-420c-bc80-678980f794a6" containerID="6370f7bf6e2566a6ac535b0aa3ec32095068b78ac323b3fe3eee69a950986ace" exitCode=0 Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.116198 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.116295 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.116335 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hp8\" (UniqueName: \"kubernetes.io/projected/af9c1da9-f756-48cf-828c-f5c468539cf9-kube-api-access-b7hp8\") pod \"obo-prometheus-operator-86648f486b-n68fn\" (UID: \"af9c1da9-f756-48cf-828c-f5c468539cf9\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.147181 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hp8\" (UniqueName: 
\"kubernetes.io/projected/af9c1da9-f756-48cf-828c-f5c468539cf9-kube-api-access-b7hp8\") pod \"obo-prometheus-operator-86648f486b-n68fn\" (UID: \"af9c1da9-f756-48cf-828c-f5c468539cf9\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.216948 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.217050 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: E1208 19:04:17.217168 5004 secret.go:189] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: object "openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" not registered Dec 08 19:04:17 crc kubenswrapper[5004]: E1208 19:04:17.217208 5004 secret.go:189] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: object "openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" not registered Dec 08 19:04:17 crc kubenswrapper[5004]: E1208 19:04:17.217253 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert podName:d523e327-b8ea-446e-9400-f70012eb2e5c nodeName:}" failed. No retries permitted until 2025-12-08 19:04:17.717230199 +0000 UTC m=+791.366138507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert") pod "obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" (UID: "d523e327-b8ea-446e-9400-f70012eb2e5c") : object "openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" not registered Dec 08 19:04:17 crc kubenswrapper[5004]: E1208 19:04:17.217292 5004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert podName:d523e327-b8ea-446e-9400-f70012eb2e5c nodeName:}" failed. No retries permitted until 2025-12-08 19:04:17.7172698 +0000 UTC m=+791.366178108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" (UID: "d523e327-b8ea-446e-9400-f70012eb2e5c") : object "openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" not registered Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.232964 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.233013 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerDied","Data":"6370f7bf6e2566a6ac535b0aa3ec32095068b78ac323b3fe3eee69a950986ace"} Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.233088 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.233120 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.233148 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-tx5qk"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.233364 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.239955 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-tx5qk"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.240140 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.285927 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.288108 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.288460 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-qq7hr\"" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.292347 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-2szp4\"" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.318447 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.318745 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-observability-operator-tls\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.318934 5004 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.319064 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkq8w\" (UniqueName: \"kubernetes.io/projected/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-kube-api-access-zkq8w\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.420615 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-observability-operator-tls\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.420687 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.420725 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkq8w\" (UniqueName: \"kubernetes.io/projected/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-kube-api-access-zkq8w\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.420779 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.424645 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.425303 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a086b119-04b6-4675-9706-4ce42521bc07-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk\" (UID: \"a086b119-04b6-4675-9706-4ce42521bc07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 
crc kubenswrapper[5004]: I1208 19:04:17.426353 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-observability-operator-tls\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.439921 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.453446 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkq8w\" (UniqueName: \"kubernetes.io/projected/b4e56493-8bcf-499b-a8e7-a8c250dfd5e8-kube-api-access-zkq8w\") pod \"observability-operator-78c97476f4-tx5qk\" (UID: \"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8\") " pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.458941 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-42xl5"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.498542 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-42xl5"] Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.498723 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.506134 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-p9l9s\"" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.521244 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgjl\" (UniqueName: \"kubernetes.io/projected/033c204a-8dca-441b-b070-e60777553c0e-kube-api-access-9pgjl\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.521629 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/033c204a-8dca-441b-b070-e60777553c0e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.567549 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.599293 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.627121 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/033c204a-8dca-441b-b070-e60777553c0e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.627232 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgjl\" (UniqueName: \"kubernetes.io/projected/033c204a-8dca-441b-b070-e60777553c0e-kube-api-access-9pgjl\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.628426 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/033c204a-8dca-441b-b070-e60777553c0e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.673550 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgjl\" (UniqueName: \"kubernetes.io/projected/033c204a-8dca-441b-b070-e60777553c0e-kube-api-access-9pgjl\") pod \"perses-operator-68bdb49cbf-42xl5\" (UID: \"033c204a-8dca-441b-b070-e60777553c0e\") " pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.746598 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.746930 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.756836 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.758469 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d523e327-b8ea-446e-9400-f70012eb2e5c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq\" (UID: \"d523e327-b8ea-446e-9400-f70012eb2e5c\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.939619 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" Dec 08 19:04:17 crc kubenswrapper[5004]: I1208 19:04:17.940674 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:18 crc kubenswrapper[5004]: I1208 19:04:18.304105 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n68fn"] Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.174661 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk"] Dec 08 19:04:19 crc kubenswrapper[5004]: W1208 19:04:19.207586 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda086b119_04b6_4675_9706_4ce42521bc07.slice/crio-32fef3f68b6490a7f2547efe6d455f3b4ac948d718301301d4e0bcf13c3f0ba4 WatchSource:0}: Error finding container 32fef3f68b6490a7f2547efe6d455f3b4ac948d718301301d4e0bcf13c3f0ba4: Status 404 returned error can't find the container with id 32fef3f68b6490a7f2547efe6d455f3b4ac948d718301301d4e0bcf13c3f0ba4 Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.229262 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-42xl5"] Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.250098 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" event={"ID":"a086b119-04b6-4675-9706-4ce42521bc07","Type":"ContainerStarted","Data":"32fef3f68b6490a7f2547efe6d455f3b4ac948d718301301d4e0bcf13c3f0ba4"} Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.264459 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" event={"ID":"af9c1da9-f756-48cf-828c-f5c468539cf9","Type":"ContainerStarted","Data":"8f6375c46f93d1d532438622e1e072c1e8312627ed7b325c7cd13cf6b41844ad"} Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.319916 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-tx5qk"] Dec 08 19:04:19 crc kubenswrapper[5004]: I1208 19:04:19.652567 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq"] Dec 08 19:04:20 crc kubenswrapper[5004]: I1208 19:04:20.309851 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" event={"ID":"d523e327-b8ea-446e-9400-f70012eb2e5c","Type":"ContainerStarted","Data":"353417b7c747ec8b06eb1729c113490e8016f16e09dc7d3df9b7e40a0d78ef79"} Dec 08 19:04:20 crc kubenswrapper[5004]: I1208 19:04:20.311245 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" event={"ID":"033c204a-8dca-441b-b070-e60777553c0e","Type":"ContainerStarted","Data":"34c838485841149a7554d0a396adf049e8ad6162b210f5a5c01d73677520b0c8"} Dec 08 19:04:20 crc kubenswrapper[5004]: I1208 19:04:20.321693 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-78c97476f4-tx5qk" event={"ID":"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8","Type":"ContainerStarted","Data":"ff98c7f70fb7cf277a7090ac318ef9c93ce9eaac8cbca80d7f2be70089442948"} Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.264387 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-76c568c449-qg6tt"] Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.333287 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-76c568c449-qg6tt"] Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.333446 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.339504 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.339815 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.342192 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-2qctn\"" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.342537 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.458550 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-apiservice-cert\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.458719 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-webhook-cert\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.458936 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzr9q\" (UniqueName: \"kubernetes.io/projected/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-kube-api-access-bzr9q\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.560508 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzr9q\" (UniqueName: \"kubernetes.io/projected/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-kube-api-access-bzr9q\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.560574 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-apiservice-cert\") pod \"elastic-operator-76c568c449-qg6tt\" 
(UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.560603 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-webhook-cert\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.575097 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-apiservice-cert\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.575892 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-webhook-cert\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.889044 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzr9q\" (UniqueName: \"kubernetes.io/projected/7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce-kube-api-access-bzr9q\") pod \"elastic-operator-76c568c449-qg6tt\" (UID: \"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce\") " pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:21 crc kubenswrapper[5004]: I1208 19:04:21.965225 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-76c568c449-qg6tt" Dec 08 19:04:22 crc kubenswrapper[5004]: I1208 19:04:22.557795 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-76c568c449-qg6tt"] Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.388619 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-76c568c449-qg6tt" event={"ID":"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce","Type":"ContainerStarted","Data":"154134a84e3c41e9221899d791c7790c1fb556caa1ae1715197ab958dc9454b1"} Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.547463 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bnq52"] Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.567052 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bnq52"] Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.567200 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.574934 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-rtpp4\"" Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.728691 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczsn\" (UniqueName: \"kubernetes.io/projected/08e8d7f4-e94f-4974-afa5-d43ad376e7b3-kube-api-access-qczsn\") pod \"interconnect-operator-78b9bd8798-bnq52\" (UID: \"08e8d7f4-e94f-4974-afa5-d43ad376e7b3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.830009 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qczsn\" (UniqueName: \"kubernetes.io/projected/08e8d7f4-e94f-4974-afa5-d43ad376e7b3-kube-api-access-qczsn\") pod \"interconnect-operator-78b9bd8798-bnq52\" (UID: \"08e8d7f4-e94f-4974-afa5-d43ad376e7b3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.887062 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczsn\" (UniqueName: \"kubernetes.io/projected/08e8d7f4-e94f-4974-afa5-d43ad376e7b3-kube-api-access-qczsn\") pod \"interconnect-operator-78b9bd8798-bnq52\" (UID: \"08e8d7f4-e94f-4974-afa5-d43ad376e7b3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" Dec 08 19:04:23 crc kubenswrapper[5004]: I1208 19:04:23.896469 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" Dec 08 19:04:24 crc kubenswrapper[5004]: I1208 19:04:24.791964 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bnq52"] Dec 08 19:04:24 crc kubenswrapper[5004]: W1208 19:04:24.853235 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08e8d7f4_e94f_4974_afa5_d43ad376e7b3.slice/crio-158dd92f9bc204962e7d9a5a1c6d67cd800a0373b584e2b9e6ddcee5a28d9bdc WatchSource:0}: Error finding container 158dd92f9bc204962e7d9a5a1c6d67cd800a0373b584e2b9e6ddcee5a28d9bdc: Status 404 returned error can't find the container with id 158dd92f9bc204962e7d9a5a1c6d67cd800a0373b584e2b9e6ddcee5a28d9bdc Dec 08 19:04:25 crc kubenswrapper[5004]: I1208 19:04:25.648980 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" event={"ID":"08e8d7f4-e94f-4974-afa5-d43ad376e7b3","Type":"ContainerStarted","Data":"158dd92f9bc204962e7d9a5a1c6d67cd800a0373b584e2b9e6ddcee5a28d9bdc"} Dec 08 19:04:41 crc kubenswrapper[5004]: I1208 19:04:41.059117 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerName="registry" containerID="cri-o://a7d8a8700520e896082fbafec5004aa917b9fc875cbdf664e7727b6a4bbed09e" gracePeriod=30 Dec 08 19:04:41 crc kubenswrapper[5004]: I1208 19:04:41.910059 5004 generic.go:358] "Generic (PLEG): container finished" podID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerID="a7d8a8700520e896082fbafec5004aa917b9fc875cbdf664e7727b6a4bbed09e" exitCode=0 Dec 08 19:04:41 crc kubenswrapper[5004]: I1208 19:04:41.910172 5004 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" event={"ID":"f9a6206e-5e26-43f6-aeeb-48d0c3e30780","Type":"ContainerDied","Data":"a7d8a8700520e896082fbafec5004aa917b9fc875cbdf664e7727b6a4bbed09e"} Dec 08 19:04:54 crc kubenswrapper[5004]: I1208 19:04:54.238180 5004 patch_prober.go:28] interesting pod/image-registry-66587d64c8-pxbdc container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.14:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 19:04:54 crc kubenswrapper[5004]: I1208 19:04:54.238806 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.14:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 19:04:57 crc kubenswrapper[5004]: I1208 19:04:57.943126 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.043206 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" event={"ID":"f9a6206e-5e26-43f6-aeeb-48d0c3e30780","Type":"ContainerDied","Data":"c4e491a195c721e3b4f06d37f89306b7e4b8c991eda62683a8c8d9f87174afe9"} Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.043253 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-pxbdc" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.043291 5004 scope.go:117] "RemoveContainer" containerID="a7d8a8700520e896082fbafec5004aa917b9fc875cbdf664e7727b6a4bbed09e" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.072629 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.072748 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.072826 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.072872 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.073048 5004 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.073121 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws6j5\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.073195 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.073224 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted\") pod \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\" (UID: \"f9a6206e-5e26-43f6-aeeb-48d0c3e30780\") " Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.074444 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.074995 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.083934 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5" (OuterVolumeSpecName: "kube-api-access-ws6j5") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "kube-api-access-ws6j5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.091474 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.092761 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.096036 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.098376 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.099325 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "f9a6206e-5e26-43f6-aeeb-48d0c3e30780" (UID: "f9a6206e-5e26-43f6-aeeb-48d0c3e30780"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175723 5004 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175777 5004 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175789 5004 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175801 5004 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175815 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws6j5\" (UniqueName: \"kubernetes.io/projected/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-kube-api-access-ws6j5\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175832 5004 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.175844 5004 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a6206e-5e26-43f6-aeeb-48d0c3e30780-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.383016 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.390662 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-pxbdc"] Dec 08 19:04:58 crc kubenswrapper[5004]: E1208 19:04:58.410558 5004 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a6206e_5e26_43f6_aeeb_48d0c3e30780.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:04:58 crc kubenswrapper[5004]: I1208 19:04:58.721261 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" path="/var/lib/kubelet/pods/f9a6206e-5e26-43f6-aeeb-48d0c3e30780/volumes" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.069348 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" event={"ID":"b4e56493-8bcf-499b-a8e7-a8c250dfd5e8","Type":"ContainerStarted","Data":"d24f80109e694fec01358742d22834918f46c3ef3de11bf93dd85be1a14725ea"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.069650 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.071606 5004 patch_prober.go:28] interesting pod/observability-operator-78c97476f4-tx5qk container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": dial tcp 10.217.0.50:8081: connect: connection refused" start-of-body= Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.071676 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" podUID="b4e56493-8bcf-499b-a8e7-a8c250dfd5e8" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": dial tcp 10.217.0.50:8081: connect: connection refused" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.073376 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" event={"ID":"d523e327-b8ea-446e-9400-f70012eb2e5c","Type":"ContainerStarted","Data":"54474273531b432234d2ea5dbd76ba39be77185857b42d28cedaa71d9cb34713"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.075747 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-76c568c449-qg6tt" event={"ID":"7e22c2ef-0daf-4320-94c1-1f6c08e1f6ce","Type":"ContainerStarted","Data":"adfc7ec01d8aab2cd60b8b1753e110d362c63a602d2d12cc1970f287a40a122b"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.077198 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" event={"ID":"08e8d7f4-e94f-4974-afa5-d43ad376e7b3","Type":"ContainerStarted","Data":"b5df5579ac0332270801c143b09dcf5b7ff7a44569b58551b644e6529e0905d9"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.080560 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" event={"ID":"033c204a-8dca-441b-b070-e60777553c0e","Type":"ContainerStarted","Data":"c1897f4730c8b76c737496aa7487b4129cd6bffef4ad46a013434dc73b63136c"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.080683 5004 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.082167 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerStarted","Data":"7245a8c15271cb1ed7d1f8a949b19dd79f681a6529747d46db4937f45fe029c7"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.087382 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" event={"ID":"a086b119-04b6-4675-9706-4ce42521bc07","Type":"ContainerStarted","Data":"b32cfa55704308a70146906f58861cd87a5225e939877d48807848d6c5413e56"} Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.101491 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" podStartSLOduration=2.895607022 podStartE2EDuration="42.101469996s" podCreationTimestamp="2025-12-08 19:04:17 +0000 UTC" firstStartedPulling="2025-12-08 19:04:19.378827507 +0000 UTC m=+793.027735815" lastFinishedPulling="2025-12-08 19:04:58.584690481 +0000 UTC m=+832.233598789" observedRunningTime="2025-12-08 19:04:59.089710917 +0000 UTC m=+832.738619225" watchObservedRunningTime="2025-12-08 19:04:59.101469996 +0000 UTC m=+832.750378304" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.113503 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" podStartSLOduration=2.754917336 podStartE2EDuration="42.113484212s" podCreationTimestamp="2025-12-08 19:04:17 +0000 UTC" firstStartedPulling="2025-12-08 19:04:19.344818512 +0000 UTC m=+792.993726820" lastFinishedPulling="2025-12-08 19:04:58.703385388 +0000 UTC m=+832.352293696" observedRunningTime="2025-12-08 19:04:59.109710464 +0000 UTC m=+832.758618772" watchObservedRunningTime="2025-12-08 19:04:59.113484212 +0000 UTC m=+832.762392530" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.156957 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-76c568c449-qg6tt" podStartSLOduration=2.24214633 podStartE2EDuration="38.156939763s" podCreationTimestamp="2025-12-08 19:04:21 +0000 UTC" firstStartedPulling="2025-12-08 19:04:22.670855018 +0000 UTC m=+796.319763336" lastFinishedPulling="2025-12-08 19:04:58.585648461 +0000 UTC m=+832.234556769" observedRunningTime="2025-12-08 19:04:59.152161693 +0000 UTC m=+832.801070011" watchObservedRunningTime="2025-12-08 19:04:59.156939763 +0000 UTC m=+832.805848071" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.180839 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-9h5jk" podStartSLOduration=3.69697186 podStartE2EDuration="43.180820761s" podCreationTimestamp="2025-12-08 19:04:16 +0000 UTC" firstStartedPulling="2025-12-08 19:04:19.229593423 +0000 UTC m=+792.878501731" lastFinishedPulling="2025-12-08 19:04:58.713442334 +0000 UTC m=+832.362350632" observedRunningTime="2025-12-08 19:04:59.178155508 +0000 UTC m=+832.827063826" watchObservedRunningTime="2025-12-08 19:04:59.180820761 +0000 UTC m=+832.829729069" Dec 08 19:04:59 crc kubenswrapper[5004]: I1208 19:04:59.267425 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-79fc5ddff5-f76rq" podStartSLOduration=4.189978742 podStartE2EDuration="43.267410523s" podCreationTimestamp="2025-12-08 19:04:16 +0000 UTC" firstStartedPulling="2025-12-08 19:04:19.675341293 +0000 UTC m=+793.324249611" lastFinishedPulling="2025-12-08 19:04:58.752773084 +0000 UTC m=+832.401681392" observedRunningTime="2025-12-08 19:04:59.226698367 +0000 UTC m=+832.875606675" watchObservedRunningTime="2025-12-08 19:04:59.267410523 +0000 UTC m=+832.916318831" Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.094764 5004 generic.go:358] "Generic (PLEG): container finished" podID="801750e2-387d-420c-bc80-678980f794a6" containerID="7245a8c15271cb1ed7d1f8a949b19dd79f681a6529747d46db4937f45fe029c7" exitCode=0 Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.094915 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerDied","Data":"7245a8c15271cb1ed7d1f8a949b19dd79f681a6529747d46db4937f45fe029c7"} Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.097482 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" event={"ID":"af9c1da9-f756-48cf-828c-f5c468539cf9","Type":"ContainerStarted","Data":"94ab30423c30168e45c806f6f5d44a2670c9343cfeb56c2d61871eaa662a3186"} Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.099940 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-tx5qk" Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.172903 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-bnq52" podStartSLOduration=3.294737471 podStartE2EDuration="37.17288167s" podCreationTimestamp="2025-12-08 19:04:23 +0000 UTC" firstStartedPulling="2025-12-08 19:04:24.857096147 +0000 UTC m=+798.506004455" lastFinishedPulling="2025-12-08 19:04:58.735240346 +0000 UTC m=+832.384148654" observedRunningTime="2025-12-08 19:04:59.290311819 +0000 UTC m=+832.939220147" watchObservedRunningTime="2025-12-08 19:05:00.17288167 +0000 UTC m=+833.821789978" Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.328865 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-n68fn" podStartSLOduration=3.979413566 podStartE2EDuration="44.328840655s" podCreationTimestamp="2025-12-08 19:04:16 +0000 UTC" firstStartedPulling="2025-12-08 19:04:18.353703231 +0000 UTC m=+792.002611539" lastFinishedPulling="2025-12-08 19:04:58.70313032 +0000 UTC m=+832.352038628" observedRunningTime="2025-12-08 19:05:00.326353316 +0000 UTC m=+833.975261624" watchObservedRunningTime="2025-12-08 19:05:00.328840655 +0000 UTC m=+833.977748953" Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.617227 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.618024 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerName="registry" Dec 08 19:05:00 crc kubenswrapper[5004]: I1208 19:05:00.618049 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerName="registry" Dec 08 19:05:00 crc 
kubenswrapper[5004]: I1208 19:05:00.618197 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9a6206e-5e26-43f6-aeeb-48d0c3e30780" containerName="registry" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:00.947651 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.018475 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.018543 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.019134 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.019221 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.019690 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.019865 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.019999 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.020038 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.020620 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.020903 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.021113 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-nvn9z\"" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.026770 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.108894 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.109004 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.109031 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.109118 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.109959 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110020 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c51c767e-5cfe-4539-b0f4-be8d50fe7133-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110041 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110101 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110192 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110285 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110358 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110493 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110561 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110655 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.110736 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.115349 5004 generic.go:358] "Generic (PLEG): container finished" podID="801750e2-387d-420c-bc80-678980f794a6" containerID="a74ed136691d8d36af61975757e681ec2d38675c927dc839ff5020b2009b1c60" exitCode=0 Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.115432 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerDied","Data":"a74ed136691d8d36af61975757e681ec2d38675c927dc839ff5020b2009b1c60"} Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.211731 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.212253 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.212397 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.212456 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.212955 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.212994 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213014 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213092 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213154 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: 
\"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213237 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c51c767e-5cfe-4539-b0f4-be8d50fe7133-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213256 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213318 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213358 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213401 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213414 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.213442 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.214979 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.215491 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.216394 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.217587 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.218503 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.219035 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.219558 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.225428 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.234618 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.234839 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" 
(UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.235256 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.235948 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.238682 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c51c767e-5cfe-4539-b0f4-be8d50fe7133-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.248523 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c51c767e-5cfe-4539-b0f4-be8d50fe7133-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c51c767e-5cfe-4539-b0f4-be8d50fe7133\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.336575 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:01 crc kubenswrapper[5004]: I1208 19:05:01.872996 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.123433 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c51c767e-5cfe-4539-b0f4-be8d50fe7133","Type":"ContainerStarted","Data":"bd6e41a751d0f8b4514fce082b1db0733f7ae9004d847f9b31b1eea3f1a72ee2"} Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.650161 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.839377 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxqqc\" (UniqueName: \"kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc\") pod \"801750e2-387d-420c-bc80-678980f794a6\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.839511 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle\") pod \"801750e2-387d-420c-bc80-678980f794a6\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.839552 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util\") pod \"801750e2-387d-420c-bc80-678980f794a6\" (UID: \"801750e2-387d-420c-bc80-678980f794a6\") " Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.841272 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle" (OuterVolumeSpecName: "bundle") pod "801750e2-387d-420c-bc80-678980f794a6" (UID: "801750e2-387d-420c-bc80-678980f794a6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.845351 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc" (OuterVolumeSpecName: "kube-api-access-sxqqc") pod "801750e2-387d-420c-bc80-678980f794a6" (UID: "801750e2-387d-420c-bc80-678980f794a6"). InnerVolumeSpecName "kube-api-access-sxqqc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.852908 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util" (OuterVolumeSpecName: "util") pod "801750e2-387d-420c-bc80-678980f794a6" (UID: "801750e2-387d-420c-bc80-678980f794a6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.940875 5004 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.940916 5004 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/801750e2-387d-420c-bc80-678980f794a6-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:02 crc kubenswrapper[5004]: I1208 19:05:02.940928 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxqqc\" (UniqueName: \"kubernetes.io/projected/801750e2-387d-420c-bc80-678980f794a6-kube-api-access-sxqqc\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:03 crc kubenswrapper[5004]: I1208 19:05:03.133945 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" Dec 08 19:05:03 crc kubenswrapper[5004]: I1208 19:05:03.133938 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5gtrk" event={"ID":"801750e2-387d-420c-bc80-678980f794a6","Type":"ContainerDied","Data":"ed79dad165166711c6b565e037ead2af691d53954854fd3f88b79f41848983be"} Dec 08 19:05:03 crc kubenswrapper[5004]: I1208 19:05:03.134800 5004 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed79dad165166711c6b565e037ead2af691d53954854fd3f88b79f41848983be" Dec 08 19:05:10 crc kubenswrapper[5004]: I1208 19:05:10.102370 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-42xl5" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.028487 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2"] Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029273 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="pull" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029289 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="pull" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029304 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="util" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029310 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="util" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029321 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="extract" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029326 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="extract" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.029425 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="801750e2-387d-420c-bc80-678980f794a6" containerName="extract" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.097528 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2"] Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.097676 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.100376 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-4zvmq\"" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.100568 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.104871 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.145063 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e1dfef11-52da-4e96-9fde-6bd24261ab82-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.145147 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn4t\" (UniqueName: \"kubernetes.io/projected/e1dfef11-52da-4e96-9fde-6bd24261ab82-kube-api-access-qtn4t\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.246766 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e1dfef11-52da-4e96-9fde-6bd24261ab82-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.246827 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtn4t\" (UniqueName: \"kubernetes.io/projected/e1dfef11-52da-4e96-9fde-6bd24261ab82-kube-api-access-qtn4t\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.247805 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e1dfef11-52da-4e96-9fde-6bd24261ab82-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.290222 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn4t\" (UniqueName: \"kubernetes.io/projected/e1dfef11-52da-4e96-9fde-6bd24261ab82-kube-api-access-qtn4t\") pod \"cert-manager-operator-controller-manager-64c74584c4-vhtx2\" (UID: \"e1dfef11-52da-4e96-9fde-6bd24261ab82\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:11 crc kubenswrapper[5004]: I1208 19:05:11.417730 5004 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.006025 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.088304 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.088516 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.096353 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.096561 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.104203 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hr5sq\"" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.130265 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184342 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184414 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184451 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92x2\" (UniqueName: \"kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184550 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184660 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184721 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184776 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184838 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.184928 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.185004 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.185104 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.185150 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.285860 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.285912 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.285933 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.285948 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v92x2\" (UniqueName: \"kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286481 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286548 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286584 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286623 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286673 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-1-build\" (UID: 
\"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286730 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286778 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286852 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.286933 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.287231 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.287259 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.287382 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.287971 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.288089 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.288143 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.288284 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.288744 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.294540 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.314937 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92x2\" (UniqueName: \"kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.321008 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:14 crc kubenswrapper[5004]: I1208 19:05:14.426399 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:15 crc kubenswrapper[5004]: I1208 19:05:15.127420 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2"] Dec 08 19:05:15 crc kubenswrapper[5004]: W1208 19:05:15.148271 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1dfef11_52da_4e96_9fde_6bd24261ab82.slice/crio-81b08db4378b27ce9140f3efd9d65abb7540d109ad6023cae6a39f1631970a7b WatchSource:0}: Error finding container 81b08db4378b27ce9140f3efd9d65abb7540d109ad6023cae6a39f1631970a7b: Status 404 returned error can't find the container with id 81b08db4378b27ce9140f3efd9d65abb7540d109ad6023cae6a39f1631970a7b Dec 08 19:05:15 crc kubenswrapper[5004]: I1208 19:05:15.234466 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:15 crc kubenswrapper[5004]: I1208 19:05:15.247930 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" event={"ID":"e1dfef11-52da-4e96-9fde-6bd24261ab82","Type":"ContainerStarted","Data":"81b08db4378b27ce9140f3efd9d65abb7540d109ad6023cae6a39f1631970a7b"} Dec 08 19:05:16 crc kubenswrapper[5004]: I1208 19:05:16.265256 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c00564f8-8e79-4df8-9598-38a3da2ff3c8","Type":"ContainerStarted","Data":"e50b03f5e6bbf83825564e53d10e7a6f32a349e7b1db8c36d79db74f40e0f58c"} Dec 08 19:05:23 crc kubenswrapper[5004]: I1208 19:05:23.685456 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.325463 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.332536 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.339505 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.339604 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.339524 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.348119 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.370870 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.370930 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcx9p\" (UniqueName: \"kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.370973 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.370996 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371021 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371041 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc 
kubenswrapper[5004]: I1208 19:05:25.371093 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371141 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371163 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371209 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371248 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.371270 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.472902 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.472947 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.472974 5004 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.472997 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473029 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473091 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473118 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473161 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473202 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473228 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473298 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull\") pod 
\"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.473341 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcx9p\" (UniqueName: \"kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.474120 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.474262 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.474909 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.475692 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.476756 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.476981 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.477393 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.477716 5004 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.477776 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.481327 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.500720 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcx9p\" (UniqueName: \"kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.503404 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:25 crc kubenswrapper[5004]: I1208 19:05:25.662697 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.626432 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.636110 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5b5787d0-f9a3-4665-a004-a0907cea5274","Type":"ContainerStarted","Data":"8c03e74ff16da3d7105280972844e490ddb3e43338889db3db0ea67dadbb0398"} Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.638183 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c51c767e-5cfe-4539-b0f4-be8d50fe7133","Type":"ContainerStarted","Data":"a5f0760a344f6e19bd34964c164c5de07beb9ae93ac7db12417f318d23e05e6d"} Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.650649 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" event={"ID":"e1dfef11-52da-4e96-9fde-6bd24261ab82","Type":"ContainerStarted","Data":"1d7fd34bb6692be25e48893e518733ddec66f5fe0731a43d2f68d0130bb40243"} Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.688290 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vhtx2" podStartSLOduration=5.16521673 podStartE2EDuration="19.68827539s" podCreationTimestamp="2025-12-08 19:05:11 +0000 UTC" firstStartedPulling="2025-12-08 19:05:15.168056631 +0000 UTC m=+848.816964939" lastFinishedPulling="2025-12-08 19:05:29.691115291 +0000 UTC m=+863.340023599" observedRunningTime="2025-12-08 19:05:30.681554315 +0000 UTC m=+864.330462633" watchObservedRunningTime="2025-12-08 19:05:30.68827539 +0000 UTC m=+864.337183698" Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.856751 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:05:30 crc kubenswrapper[5004]: I1208 19:05:30.986918 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.001482 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.001545 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.669677 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5b5787d0-f9a3-4665-a004-a0907cea5274","Type":"ContainerStarted","Data":"bcebd0d153e82a8c12c79e618d4dd7f559211a7906dc425a14e42358c4fde2e7"} Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.671747 5004 generic.go:358] "Generic (PLEG): container finished" podID="c51c767e-5cfe-4539-b0f4-be8d50fe7133" 
containerID="a5f0760a344f6e19bd34964c164c5de07beb9ae93ac7db12417f318d23e05e6d" exitCode=0 Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.671859 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c51c767e-5cfe-4539-b0f4-be8d50fe7133","Type":"ContainerDied","Data":"a5f0760a344f6e19bd34964c164c5de07beb9ae93ac7db12417f318d23e05e6d"} Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.675519 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" containerName="manage-dockerfile" containerID="cri-o://b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a" gracePeriod=30 Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.675676 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c00564f8-8e79-4df8-9598-38a3da2ff3c8","Type":"ContainerStarted","Data":"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a"} Dec 08 19:05:31 crc kubenswrapper[5004]: I1208 19:05:31.854213 5004 ???:1] "http: TLS handshake error from 192.168.126.11:56574: no serving certificate available for the kubelet" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.220486 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c00564f8-8e79-4df8-9598-38a3da2ff3c8/manage-dockerfile/0.log" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.221063 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256376 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256421 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256437 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256476 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256492 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92x2\" (UniqueName: \"kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 
19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256550 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256587 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256608 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256630 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256658 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256724 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.256758 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles\") pod \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\" (UID: \"c00564f8-8e79-4df8-9598-38a3da2ff3c8\") " Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257223 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257592 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257629 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257605 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257756 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257810 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.257826 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.258917 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.259154 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.266006 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2" (OuterVolumeSpecName: "kube-api-access-v92x2") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "kube-api-access-v92x2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.266302 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-push") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "builder-dockercfg-hr5sq-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.271240 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-pull") pod "c00564f8-8e79-4df8-9598-38a3da2ff3c8" (UID: "c00564f8-8e79-4df8-9598-38a3da2ff3c8"). InnerVolumeSpecName "builder-dockercfg-hr5sq-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359124 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359186 5004 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359208 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359241 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359264 5004 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359284 5004 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359304 5004 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359323 5004 reconciler_common.go:299] "Volume detached 
for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359342 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/c00564f8-8e79-4df8-9598-38a3da2ff3c8-builder-dockercfg-hr5sq-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359362 5004 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359385 5004 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c00564f8-8e79-4df8-9598-38a3da2ff3c8-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.359405 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v92x2\" (UniqueName: \"kubernetes.io/projected/c00564f8-8e79-4df8-9598-38a3da2ff3c8-kube-api-access-v92x2\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.682608 5004 generic.go:358] "Generic (PLEG): container finished" podID="c51c767e-5cfe-4539-b0f4-be8d50fe7133" containerID="1d9b330bfc414c9ab19e86aa8f411337f60741270e2c14c5ccbc11dc96174b0a" exitCode=0 Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.682924 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c51c767e-5cfe-4539-b0f4-be8d50fe7133","Type":"ContainerDied","Data":"1d9b330bfc414c9ab19e86aa8f411337f60741270e2c14c5ccbc11dc96174b0a"} Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.685364 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c00564f8-8e79-4df8-9598-38a3da2ff3c8/manage-dockerfile/0.log" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.685398 5004 generic.go:358] "Generic (PLEG): container finished" podID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" containerID="b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a" exitCode=1 Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.686746 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.687259 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c00564f8-8e79-4df8-9598-38a3da2ff3c8","Type":"ContainerDied","Data":"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a"} Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.687418 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c00564f8-8e79-4df8-9598-38a3da2ff3c8","Type":"ContainerDied","Data":"e50b03f5e6bbf83825564e53d10e7a6f32a349e7b1db8c36d79db74f40e0f58c"} Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.687448 5004 scope.go:117] "RemoveContainer" containerID="b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.708631 5004 scope.go:117] "RemoveContainer" containerID="b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a" Dec 08 19:05:32 crc kubenswrapper[5004]: E1208 19:05:32.709060 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a\": container with ID starting with b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a not found: ID does not exist" containerID="b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.709178 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a"} err="failed to get container status \"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a\": rpc error: code = NotFound desc = could not find container \"b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a\": container with ID starting with b9790fb5bae5472fa8b087292a90ee23d1b317013431f5f53386c9ef2997ee0a not found: ID does not exist" Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.777154 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:32 crc kubenswrapper[5004]: I1208 19:05:32.783111 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:05:33 crc kubenswrapper[5004]: I1208 19:05:33.576091 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:33 crc kubenswrapper[5004]: I1208 19:05:33.692298 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="5b5787d0-f9a3-4665-a004-a0907cea5274" containerName="git-clone" containerID="cri-o://bcebd0d153e82a8c12c79e618d4dd7f559211a7906dc425a14e42358c4fde2e7" gracePeriod=30 Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.002950 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-95tr5"] Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.003611 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" containerName="manage-dockerfile" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.003633 5004 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" containerName="manage-dockerfile" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.003754 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" containerName="manage-dockerfile" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.492338 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-95tr5"] Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.492496 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.496481 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-s582r\"" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.496621 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.502314 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.589613 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.589960 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szg9v\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-kube-api-access-szg9v\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.691321 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.691423 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-szg9v\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-kube-api-access-szg9v\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.698431 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_5b5787d0-f9a3-4665-a004-a0907cea5274/git-clone/0.log" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.698472 5004 generic.go:358] "Generic (PLEG): container finished" podID="5b5787d0-f9a3-4665-a004-a0907cea5274" containerID="bcebd0d153e82a8c12c79e618d4dd7f559211a7906dc425a14e42358c4fde2e7" exitCode=1 Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.698613 5004 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5b5787d0-f9a3-4665-a004-a0907cea5274","Type":"ContainerDied","Data":"bcebd0d153e82a8c12c79e618d4dd7f559211a7906dc425a14e42358c4fde2e7"} Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.700233 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c51c767e-5cfe-4539-b0f4-be8d50fe7133","Type":"ContainerStarted","Data":"91e7f5225b69ebfa6d520601e17182a3491cddbb00d8ee745bb20ddd988968c0"} Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.701383 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.712703 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-szg9v\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-kube-api-access-szg9v\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.717444 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aafeb1cf-f29e-4e10-8697-5b0a06f1c7be-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-95tr5\" (UID: \"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.718518 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00564f8-8e79-4df8-9598-38a3da2ff3c8" path="/var/lib/kubelet/pods/c00564f8-8e79-4df8-9598-38a3da2ff3c8/volumes" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.741897 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.7687312649999996 podStartE2EDuration="34.741881119s" podCreationTimestamp="2025-12-08 19:05:00 +0000 UTC" firstStartedPulling="2025-12-08 19:05:01.884228606 +0000 UTC m=+835.533136914" lastFinishedPulling="2025-12-08 19:05:29.85737846 +0000 UTC m=+863.506286768" observedRunningTime="2025-12-08 19:05:34.736429463 +0000 UTC m=+868.385337771" watchObservedRunningTime="2025-12-08 19:05:34.741881119 +0000 UTC m=+868.390789417" Dec 08 19:05:34 crc kubenswrapper[5004]: I1208 19:05:34.853346 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.131133 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-95tr5"] Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.308017 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_5b5787d0-f9a3-4665-a004-a0907cea5274/git-clone/0.log" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.308128 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.401909 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.401976 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402006 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402036 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402101 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402125 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402163 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402192 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402250 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402298 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402336 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402371 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcx9p\" (UniqueName: \"kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p\") pod \"5b5787d0-f9a3-4665-a004-a0907cea5274\" (UID: \"5b5787d0-f9a3-4665-a004-a0907cea5274\") " Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.402838 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.403395 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.403541 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.403559 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.404000 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.404012 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.407366 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.407747 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.408252 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.416294 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p" (OuterVolumeSpecName: "kube-api-access-rcx9p") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "kube-api-access-rcx9p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.422516 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-pull") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "builder-dockercfg-hr5sq-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.422544 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-push") pod "5b5787d0-f9a3-4665-a004-a0907cea5274" (UID: "5b5787d0-f9a3-4665-a004-a0907cea5274"). InnerVolumeSpecName "builder-dockercfg-hr5sq-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504147 5004 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504192 5004 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504206 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504217 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504228 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504239 5004 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504249 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/5b5787d0-f9a3-4665-a004-a0907cea5274-builder-dockercfg-hr5sq-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504260 5004 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5b5787d0-f9a3-4665-a004-a0907cea5274-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504272 5004 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504284 5004 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5b5787d0-f9a3-4665-a004-a0907cea5274-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504295 5004 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5b5787d0-f9a3-4665-a004-a0907cea5274-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.504305 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rcx9p\" (UniqueName: \"kubernetes.io/projected/5b5787d0-f9a3-4665-a004-a0907cea5274-kube-api-access-rcx9p\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.707987 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" event={"ID":"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be","Type":"ContainerStarted","Data":"219b1c18f8f67a8b1bfbbd24516e45521ca616fed8461f7c1311086e6beb67d9"} Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.710276 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_5b5787d0-f9a3-4665-a004-a0907cea5274/git-clone/0.log" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.711022 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.711200 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5b5787d0-f9a3-4665-a004-a0907cea5274","Type":"ContainerDied","Data":"8c03e74ff16da3d7105280972844e490ddb3e43338889db3db0ea67dadbb0398"} Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.711311 5004 scope.go:117] "RemoveContainer" containerID="bcebd0d153e82a8c12c79e618d4dd7f559211a7906dc425a14e42358c4fde2e7" Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.757703 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:35 crc kubenswrapper[5004]: I1208 19:05:35.768346 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:05:36 crc kubenswrapper[5004]: I1208 19:05:36.720699 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5787d0-f9a3-4665-a004-a0907cea5274" path="/var/lib/kubelet/pods/5b5787d0-f9a3-4665-a004-a0907cea5274/volumes" Dec 08 19:05:37 crc kubenswrapper[5004]: I1208 19:05:37.983506 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr"] Dec 08 19:05:37 crc kubenswrapper[5004]: I1208 19:05:37.984443 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b5787d0-f9a3-4665-a004-a0907cea5274" containerName="git-clone" Dec 08 19:05:37 crc kubenswrapper[5004]: I1208 19:05:37.984463 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5787d0-f9a3-4665-a004-a0907cea5274" containerName="git-clone" Dec 08 19:05:37 crc kubenswrapper[5004]: I1208 19:05:37.984557 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b5787d0-f9a3-4665-a004-a0907cea5274" containerName="git-clone" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.432561 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr"] Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.432715 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.437053 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-fbpv4\"" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.554989 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7rn\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-kube-api-access-fv7rn\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.556213 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.658054 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7rn\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-kube-api-access-fv7rn\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.658184 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.692955 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.696798 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7rn\" (UniqueName: \"kubernetes.io/projected/5a6e3708-c6e1-4eac-93a1-b1ab01c83839-kube-api-access-fv7rn\") pod \"cert-manager-cainjector-7dbf76d5c8-tcgxr\" (UID: \"5a6e3708-c6e1-4eac-93a1-b1ab01c83839\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:38 crc kubenswrapper[5004]: I1208 19:05:38.750055 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" Dec 08 19:05:39 crc kubenswrapper[5004]: I1208 19:05:39.268213 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr"] Dec 08 19:05:44 crc kubenswrapper[5004]: I1208 19:05:44.394061 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.732904 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.733232 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.735809 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.737258 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hr5sq\"" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.737505 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.739931 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884457 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884500 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2xk\" (UniqueName: \"kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884521 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884615 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884679 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884702 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884718 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884767 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884876 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884893 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884954 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.884998 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.992498 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993233 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dr2xk\" (UniqueName: \"kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993252 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993185 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993323 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993389 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.993822 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994013 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994043 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 
19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994060 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994094 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994160 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994176 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994206 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994228 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994419 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994527 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.994970 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.995025 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.995200 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:45 crc kubenswrapper[5004]: I1208 19:05:45.995471 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:46 crc kubenswrapper[5004]: I1208 19:05:46.005871 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:46 crc kubenswrapper[5004]: I1208 19:05:46.011541 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:46 crc kubenswrapper[5004]: I1208 19:05:46.021831 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr2xk\" (UniqueName: \"kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk\") pod \"service-telemetry-operator-3-build\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:46 crc kubenswrapper[5004]: I1208 19:05:46.064722 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:46 crc kubenswrapper[5004]: I1208 19:05:46.846625 5004 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c51c767e-5cfe-4539-b0f4-be8d50fe7133" containerName="elasticsearch" probeResult="failure" output=< Dec 08 19:05:46 crc kubenswrapper[5004]: {"timestamp": "2025-12-08T19:05:46+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 19:05:46 crc kubenswrapper[5004]: > Dec 08 19:05:47 crc kubenswrapper[5004]: W1208 19:05:47.909466 5004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a6e3708_c6e1_4eac_93a1_b1ab01c83839.slice/crio-f21bf6a21ce777c851380699edcf2f8deb1964df624e77c21a9aaa057819ca94 WatchSource:0}: Error finding container f21bf6a21ce777c851380699edcf2f8deb1964df624e77c21a9aaa057819ca94: Status 404 returned error can't find the container with id f21bf6a21ce777c851380699edcf2f8deb1964df624e77c21a9aaa057819ca94 Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.220493 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.825637 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" event={"ID":"aafeb1cf-f29e-4e10-8697-5b0a06f1c7be","Type":"ContainerStarted","Data":"39ed744cefe79e0b064ea157348e483ba996c0b26a1ecd1ffe3797e32cca3b2a"} Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.825804 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.827517 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"298ccd50-a2d9-43d0-9057-826621792577","Type":"ContainerStarted","Data":"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7"} Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.827549 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"298ccd50-a2d9-43d0-9057-826621792577","Type":"ContainerStarted","Data":"ab3bb63b99a8318ba01563ec466530f50761b4a7c4d75dccdd4b3aa47b0682e2"} Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.831726 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" event={"ID":"5a6e3708-c6e1-4eac-93a1-b1ab01c83839","Type":"ContainerStarted","Data":"797d88a30d906411872c61ef1730e0aa23af07a37f04481ef1bb8c147818c731"} Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.831771 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" event={"ID":"5a6e3708-c6e1-4eac-93a1-b1ab01c83839","Type":"ContainerStarted","Data":"f21bf6a21ce777c851380699edcf2f8deb1964df624e77c21a9aaa057819ca94"} Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.878214 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" podStartSLOduration=3.005414437 podStartE2EDuration="15.878194007s" podCreationTimestamp="2025-12-08 19:05:33 +0000 UTC" firstStartedPulling="2025-12-08 19:05:35.151575709 +0000 UTC m=+868.800484017" lastFinishedPulling="2025-12-08 19:05:48.024355279 +0000 
UTC m=+881.673263587" observedRunningTime="2025-12-08 19:05:48.84717631 +0000 UTC m=+882.496084618" watchObservedRunningTime="2025-12-08 19:05:48.878194007 +0000 UTC m=+882.527102315" Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.894471 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-tcgxr" podStartSLOduration=11.23207365 podStartE2EDuration="11.89445399s" podCreationTimestamp="2025-12-08 19:05:37 +0000 UTC" firstStartedPulling="2025-12-08 19:05:47.916021533 +0000 UTC m=+881.564942711" lastFinishedPulling="2025-12-08 19:05:48.578414743 +0000 UTC m=+882.227323051" observedRunningTime="2025-12-08 19:05:48.892681674 +0000 UTC m=+882.541589992" watchObservedRunningTime="2025-12-08 19:05:48.89445399 +0000 UTC m=+882.543362298" Dec 08 19:05:48 crc kubenswrapper[5004]: I1208 19:05:48.908686 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51166: no serving certificate available for the kubelet" Dec 08 19:05:49 crc kubenswrapper[5004]: I1208 19:05:49.957475 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:50 crc kubenswrapper[5004]: I1208 19:05:50.844698 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="298ccd50-a2d9-43d0-9057-826621792577" containerName="git-clone" containerID="cri-o://75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7" gracePeriod=30 Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.299667 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_298ccd50-a2d9-43d0-9057-826621792577/git-clone/0.log" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.299976 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.384888 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.384950 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.384992 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr2xk\" (UniqueName: \"kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385027 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385029 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385050 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385147 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385248 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385269 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385342 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385387 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385412 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385437 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir\") pod \"298ccd50-a2d9-43d0-9057-826621792577\" (UID: \"298ccd50-a2d9-43d0-9057-826621792577\") " Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385705 5004 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.385742 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: 
"298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.387363 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.390722 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.390918 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.390950 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.391104 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.391286 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.391638 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.393653 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk" (OuterVolumeSpecName: "kube-api-access-dr2xk") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "kube-api-access-dr2xk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.400428 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-push") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "builder-dockercfg-hr5sq-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.401392 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-pull") pod "298ccd50-a2d9-43d0-9057-826621792577" (UID: "298ccd50-a2d9-43d0-9057-826621792577"). InnerVolumeSpecName "builder-dockercfg-hr5sq-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487342 5004 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487378 5004 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/298ccd50-a2d9-43d0-9057-826621792577-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487389 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487398 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dr2xk\" (UniqueName: \"kubernetes.io/projected/298ccd50-a2d9-43d0-9057-826621792577-kube-api-access-dr2xk\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487408 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487416 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487423 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/298ccd50-a2d9-43d0-9057-826621792577-builder-dockercfg-hr5sq-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487432 5004 reconciler_common.go:299] "Volume detached for 
volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487440 5004 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487449 5004 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/298ccd50-a2d9-43d0-9057-826621792577-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.487457 5004 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/298ccd50-a2d9-43d0-9057-826621792577-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.852670 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_298ccd50-a2d9-43d0-9057-826621792577/git-clone/0.log" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.852725 5004 generic.go:358] "Generic (PLEG): container finished" podID="298ccd50-a2d9-43d0-9057-826621792577" containerID="75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7" exitCode=1 Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.852890 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.852881 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"298ccd50-a2d9-43d0-9057-826621792577","Type":"ContainerDied","Data":"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7"} Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.853004 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"298ccd50-a2d9-43d0-9057-826621792577","Type":"ContainerDied","Data":"ab3bb63b99a8318ba01563ec466530f50761b4a7c4d75dccdd4b3aa47b0682e2"} Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.853022 5004 scope.go:117] "RemoveContainer" containerID="75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.884661 5004 scope.go:117] "RemoveContainer" containerID="75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7" Dec 08 19:05:51 crc kubenswrapper[5004]: E1208 19:05:51.888305 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7\": container with ID starting with 75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7 not found: ID does not exist" containerID="75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.888359 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7"} err="failed to get container status \"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7\": rpc error: code = NotFound desc = could not find 
container \"75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7\": container with ID starting with 75e2694102ce435c09c0f5509e9e8976c8cf657e67de7dc2133183eea40dfff7 not found: ID does not exist" Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.894110 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:51 crc kubenswrapper[5004]: I1208 19:05:51.897859 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:05:52 crc kubenswrapper[5004]: I1208 19:05:52.025940 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:05:52 crc kubenswrapper[5004]: I1208 19:05:52.717924 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298ccd50-a2d9-43d0-9057-826621792577" path="/var/lib/kubelet/pods/298ccd50-a2d9-43d0-9057-826621792577/volumes" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.760322 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-tz7s5"] Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.761058 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="298ccd50-a2d9-43d0-9057-826621792577" containerName="git-clone" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.761092 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="298ccd50-a2d9-43d0-9057-826621792577" containerName="git-clone" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.761242 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="298ccd50-a2d9-43d0-9057-826621792577" containerName="git-clone" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.764711 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.767729 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-vr7nb\"" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.785835 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-tz7s5"] Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.824048 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-bound-sa-token\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.824120 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6n95\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-kube-api-access-z6n95\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.925170 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-bound-sa-token\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.925213 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6n95\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-kube-api-access-z6n95\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.957856 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-bound-sa-token\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:53 crc kubenswrapper[5004]: I1208 19:05:53.965621 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6n95\" (UniqueName: \"kubernetes.io/projected/298b64d9-2e2d-4190-8446-29861c4e704a-kube-api-access-z6n95\") pod \"cert-manager-858d87f86b-tz7s5\" (UID: \"298b64d9-2e2d-4190-8446-29861c4e704a\") " pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.081319 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-tz7s5" Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.371198 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-tz7s5"] Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.841148 5004 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-95tr5" Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.875967 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-tz7s5" event={"ID":"298b64d9-2e2d-4190-8446-29861c4e704a","Type":"ContainerStarted","Data":"2d48af8bd675ac86da890da3e4c914de2dc88e14cb1361b0be5463a3bf232b84"} Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.876296 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-tz7s5" event={"ID":"298b64d9-2e2d-4190-8446-29861c4e704a","Type":"ContainerStarted","Data":"7838e89a4ac0236f4e98d49ccb97f8730c9cb9ceb8ed2cc0d62ca9063d56e433"} Dec 08 19:05:54 crc kubenswrapper[5004]: I1208 19:05:54.909479 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-tz7s5" podStartSLOduration=1.909458498 podStartE2EDuration="1.909458498s" podCreationTimestamp="2025-12-08 19:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:05:54.905286955 +0000 UTC m=+888.554195263" watchObservedRunningTime="2025-12-08 19:05:54.909458498 +0000 UTC m=+888.558366806" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.000050 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.000661 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.000714 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.001416 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973"} pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.001478 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" containerID="cri-o://2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973" gracePeriod=600 Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.400484 5004 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.406338 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.410830 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.410852 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.410898 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hr5sq\"" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.413948 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.438433 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.510967 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grh4v\" (UniqueName: \"kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511007 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511027 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511056 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511204 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511336 5004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511364 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511388 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511443 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511479 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511538 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.511601 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613152 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613194 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613223 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613260 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613286 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613313 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613366 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613397 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613402 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grh4v\" (UniqueName: \"kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613464 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613499 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613545 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613577 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.613712 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614006 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614258 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614116 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614298 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614088 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.614392 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.615112 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.634851 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.634864 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.639341 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grh4v\" (UniqueName: \"kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v\") pod \"service-telemetry-operator-4-build\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.724876 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.915836 5004 generic.go:358] "Generic (PLEG): container finished" podID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerID="2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973" exitCode=0 Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.916006 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerDied","Data":"2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973"} Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.916259 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"3cd74838e7224901f7e38c38df57a40ad6f7276f3fe12262e14eac81795f83ac"} Dec 08 19:06:01 crc kubenswrapper[5004]: I1208 19:06:01.916283 5004 scope.go:117] "RemoveContainer" containerID="756d17bffa06f06addeab12143ba8c1f1794a66f155e593188473bf5f6da5c51" Dec 08 19:06:02 crc kubenswrapper[5004]: I1208 19:06:02.161135 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:02 crc kubenswrapper[5004]: I1208 19:06:02.924094 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"08776aa3-4ced-47ff-87ce-ac83422d6c33","Type":"ContainerStarted","Data":"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5"} Dec 08 19:06:02 crc kubenswrapper[5004]: I1208 19:06:02.924764 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"08776aa3-4ced-47ff-87ce-ac83422d6c33","Type":"ContainerStarted","Data":"caefe83131d03caa3ea178d0ffada371cdaeeb1a50a752dff956c8ed51e51290"} Dec 08 19:06:02 crc kubenswrapper[5004]: I1208 19:06:02.976195 5004 ???:1] "http: TLS handshake error from 192.168.126.11:38246: no serving certificate available for the kubelet" Dec 08 19:06:04 crc kubenswrapper[5004]: I1208 19:06:04.004155 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:04 crc kubenswrapper[5004]: I1208 19:06:04.956424 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="08776aa3-4ced-47ff-87ce-ac83422d6c33" containerName="git-clone" containerID="cri-o://66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5" gracePeriod=30 Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.886032 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_08776aa3-4ced-47ff-87ce-ac83422d6c33/git-clone/0.log" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.886674 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966289 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_08776aa3-4ced-47ff-87ce-ac83422d6c33/git-clone/0.log" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966336 5004 generic.go:358] "Generic (PLEG): container finished" podID="08776aa3-4ced-47ff-87ce-ac83422d6c33" containerID="66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5" exitCode=1 Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966388 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"08776aa3-4ced-47ff-87ce-ac83422d6c33","Type":"ContainerDied","Data":"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5"} Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966415 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"08776aa3-4ced-47ff-87ce-ac83422d6c33","Type":"ContainerDied","Data":"caefe83131d03caa3ea178d0ffada371cdaeeb1a50a752dff956c8ed51e51290"} Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966429 5004 scope.go:117] "RemoveContainer" containerID="66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.966600 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982383 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982446 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982482 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grh4v\" (UniqueName: \"kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982523 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982554 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982575 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982951 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983064 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983424 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.982644 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983501 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983623 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983758 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.983882 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984262 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984279 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984333 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984386 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984412 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir\") pod \"08776aa3-4ced-47ff-87ce-ac83422d6c33\" (UID: \"08776aa3-4ced-47ff-87ce-ac83422d6c33\") " Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984625 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.984655 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985085 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985108 5004 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985120 5004 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/08776aa3-4ced-47ff-87ce-ac83422d6c33-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985152 5004 scope.go:117] "RemoveContainer" containerID="66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985184 5004 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985196 5004 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985219 5004 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985228 5004 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985238 5004 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08776aa3-4ced-47ff-87ce-ac83422d6c33-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985249 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/08776aa3-4ced-47ff-87ce-ac83422d6c33-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:05 crc kubenswrapper[5004]: E1208 19:06:05.985668 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5\": container with ID starting with 66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5 not found: ID does not exist" containerID="66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.985696 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5"} err="failed to get container status \"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5\": rpc error: code = NotFound desc = could 
not find container \"66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5\": container with ID starting with 66a31b0381c98d78d95ca4879d8efa9e1058aa64f490973e198aa792f490fbf5 not found: ID does not exist" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.987549 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-pull") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "builder-dockercfg-hr5sq-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.992607 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-push") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "builder-dockercfg-hr5sq-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:06:05 crc kubenswrapper[5004]: I1208 19:06:05.993211 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v" (OuterVolumeSpecName: "kube-api-access-grh4v") pod "08776aa3-4ced-47ff-87ce-ac83422d6c33" (UID: "08776aa3-4ced-47ff-87ce-ac83422d6c33"). InnerVolumeSpecName "kube-api-access-grh4v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.086823 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grh4v\" (UniqueName: \"kubernetes.io/projected/08776aa3-4ced-47ff-87ce-ac83422d6c33-kube-api-access-grh4v\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.087111 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.087120 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/08776aa3-4ced-47ff-87ce-ac83422d6c33-builder-dockercfg-hr5sq-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.302199 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.310280 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:06:06 crc kubenswrapper[5004]: I1208 19:06:06.721240 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08776aa3-4ced-47ff-87ce-ac83422d6c33" path="/var/lib/kubelet/pods/08776aa3-4ced-47ff-87ce-ac83422d6c33/volumes" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.115960 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.135399 5004 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-wqg6t_5d3eaa17-c643-4536-88a0-a76854e545ab/openshift-config-operator/0.log" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.151122 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxdkt_e00ae10b-1af7-4d7e-aad6-135dac0d2aa5/kube-multus/0.log" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.151773 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxdkt_e00ae10b-1af7-4d7e-aad6-135dac0d2aa5/kube-multus/0.log" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.159630 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:06:07 crc kubenswrapper[5004]: I1208 19:06:07.159667 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.440128 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.441428 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="08776aa3-4ced-47ff-87ce-ac83422d6c33" containerName="git-clone" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.441448 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="08776aa3-4ced-47ff-87ce-ac83422d6c33" containerName="git-clone" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.441554 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="08776aa3-4ced-47ff-87ce-ac83422d6c33" containerName="git-clone" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.448611 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.450758 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.450970 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hr5sq\"" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.451229 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.455794 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.464541 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609746 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609789 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609813 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609836 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609855 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rn9m\" (UniqueName: \"kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609883 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir\") pod 
\"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609904 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609957 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.609980 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.610000 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.610046 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.610100 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.710905 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.711267 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" 
(UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.711427 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.711795 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712018 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712008 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712180 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712255 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712377 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712485 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712594 5004 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712702 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7rn9m\" (UniqueName: \"kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712816 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712913 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.713051 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.713162 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.712842 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.713557 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.713615 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.714042 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.714224 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.719708 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.733175 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rn9m\" (UniqueName: \"kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.739518 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.769364 5004 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:15 crc kubenswrapper[5004]: I1208 19:06:15.984910 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:16 crc kubenswrapper[5004]: I1208 19:06:16.027120 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae","Type":"ContainerStarted","Data":"7c80b25ff478ae6619641e3382d514062216ecda27fe55b8dc8858b824028ef0"} Dec 08 19:06:17 crc kubenswrapper[5004]: I1208 19:06:17.036352 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae","Type":"ContainerStarted","Data":"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4"} Dec 08 19:06:17 crc kubenswrapper[5004]: I1208 19:06:17.083781 5004 ???:1] "http: TLS handshake error from 192.168.126.11:50156: no serving certificate available for the kubelet" Dec 08 19:06:18 crc kubenswrapper[5004]: I1208 19:06:18.111065 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.047025 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" containerName="git-clone" containerID="cri-o://43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4" gracePeriod=30 Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.441659 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_7e7c1e94-6cdf-415b-8b6e-62fe87a311ae/git-clone/0.log" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.442136 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565343 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565385 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565413 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565436 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565474 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565521 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565573 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rn9m\" (UniqueName: \"kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565600 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565659 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565681 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565708 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.565764 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run\") pod \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\" (UID: \"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae\") " Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.566031 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.566385 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.566457 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.567367 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.567448 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.567949 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). 
InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.568170 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.568297 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.568350 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.572690 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-pull") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "builder-dockercfg-hr5sq-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.572975 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push" (OuterVolumeSpecName: "builder-dockercfg-hr5sq-push") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "builder-dockercfg-hr5sq-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.574310 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m" (OuterVolumeSpecName: "kube-api-access-7rn9m") pod "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" (UID: "7e7c1e94-6cdf-415b-8b6e-62fe87a311ae"). InnerVolumeSpecName "kube-api-access-7rn9m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.667835 5004 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668449 5004 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668529 5004 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668604 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668857 5004 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668915 5004 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.668973 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-pull\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.669030 5004 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.669109 5004 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.669176 5004 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.669236 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7rn9m\" (UniqueName: \"kubernetes.io/projected/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-kube-api-access-7rn9m\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:19 crc kubenswrapper[5004]: I1208 19:06:19.669287 5004 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hr5sq-push\" (UniqueName: \"kubernetes.io/secret/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae-builder-dockercfg-hr5sq-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054638 5004 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_7e7c1e94-6cdf-415b-8b6e-62fe87a311ae/git-clone/0.log" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054691 5004 generic.go:358] "Generic (PLEG): container finished" podID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" containerID="43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4" exitCode=1 Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054740 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae","Type":"ContainerDied","Data":"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4"} Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054770 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e7c1e94-6cdf-415b-8b6e-62fe87a311ae","Type":"ContainerDied","Data":"7c80b25ff478ae6619641e3382d514062216ecda27fe55b8dc8858b824028ef0"} Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054789 5004 scope.go:117] "RemoveContainer" containerID="43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.054796 5004 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.074897 5004 scope.go:117] "RemoveContainer" containerID="43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4" Dec 08 19:06:20 crc kubenswrapper[5004]: E1208 19:06:20.075388 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4\": container with ID starting with 43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4 not found: ID does not exist" containerID="43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.075418 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4"} err="failed to get container status \"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4\": rpc error: code = NotFound desc = could not find container \"43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4\": container with ID starting with 43014d3f99c3b609bd841118846c23548aa8441affeeb8fb78b0a3bd1af08bd4 not found: ID does not exist" Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.084620 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.091706 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:06:20 crc kubenswrapper[5004]: I1208 19:06:20.717094 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" path="/var/lib/kubelet/pods/7e7c1e94-6cdf-415b-8b6e-62fe87a311ae/volumes" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.842842 5004 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2grff/must-gather-5hgb8"] Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.844162 5004 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" containerName="git-clone" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.844195 5004 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" containerName="git-clone" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.844365 5004 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e7c1e94-6cdf-415b-8b6e-62fe87a311ae" containerName="git-clone" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.855707 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2grff/must-gather-5hgb8"] Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.855948 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.857936 5004 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-2grff\"/\"default-dockercfg-4gvhr\"" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.858492 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2grff\"/\"openshift-service-ca.crt\"" Dec 08 19:07:07 crc kubenswrapper[5004]: I1208 19:07:07.858666 5004 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2grff\"/\"kube-root-ca.crt\"" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.036957 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.037032 5004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnj7\" (UniqueName: \"kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.138613 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.138710 5004 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnj7\" (UniqueName: \"kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.139129 5004 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.172502 5004 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mpnj7\" (UniqueName: \"kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7\") pod \"must-gather-5hgb8\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.173966 5004 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:07:08 crc kubenswrapper[5004]: I1208 19:07:08.628776 5004 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2grff/must-gather-5hgb8"] Dec 08 19:07:09 crc kubenswrapper[5004]: I1208 19:07:09.373429 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2grff/must-gather-5hgb8" event={"ID":"f9b866a0-870d-487a-ab82-3754f4497600","Type":"ContainerStarted","Data":"4da7d4f6c87801f0abc64294457d062f854d142de7cf27df99bade3082caa626"} Dec 08 19:07:16 crc kubenswrapper[5004]: I1208 19:07:16.413782 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2grff/must-gather-5hgb8" event={"ID":"f9b866a0-870d-487a-ab82-3754f4497600","Type":"ContainerStarted","Data":"bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30"} Dec 08 19:07:16 crc kubenswrapper[5004]: I1208 19:07:16.415600 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2grff/must-gather-5hgb8" event={"ID":"f9b866a0-870d-487a-ab82-3754f4497600","Type":"ContainerStarted","Data":"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82"} Dec 08 19:07:16 crc kubenswrapper[5004]: I1208 19:07:16.435613 5004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2grff/must-gather-5hgb8" podStartSLOduration=2.5343652260000002 podStartE2EDuration="9.435594315s" podCreationTimestamp="2025-12-08 19:07:07 +0000 UTC" firstStartedPulling="2025-12-08 19:07:08.647901317 +0000 UTC m=+962.296809625" lastFinishedPulling="2025-12-08 19:07:15.549130406 +0000 UTC m=+969.198038714" observedRunningTime="2025-12-08 19:07:16.429284982 +0000 UTC m=+970.078193310" watchObservedRunningTime="2025-12-08 19:07:16.435594315 +0000 UTC m=+970.084502624" Dec 08 19:07:17 crc kubenswrapper[5004]: I1208 19:07:17.825658 5004 ???:1] "http: TLS handshake error from 192.168.126.11:46316: no serving certificate available for the kubelet" Dec 08 19:07:53 crc kubenswrapper[5004]: I1208 19:07:53.571744 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51324: no serving certificate available for the kubelet" Dec 08 19:07:53 crc kubenswrapper[5004]: E1208 19:07:53.651779 5004 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 19:07:53 crc kubenswrapper[5004]: I1208 19:07:53.693809 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51334: no serving certificate available for the kubelet" Dec 08 19:07:53 crc kubenswrapper[5004]: I1208 19:07:53.710211 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51336: no serving certificate available for the kubelet" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.681023 5004 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.692184 5004 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.708746 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59034: no serving certificate available for the kubelet" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.807605 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59042: no serving certificate available for the kubelet" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.839677 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59056: no serving certificate available for the kubelet" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.878933 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59072: no serving certificate available for the kubelet" Dec 08 19:07:55 crc kubenswrapper[5004]: I1208 19:07:55.946635 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59080: no serving certificate available for the kubelet" Dec 08 19:07:56 crc kubenswrapper[5004]: I1208 19:07:56.053598 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59092: no serving certificate available for the kubelet" Dec 08 19:07:56 crc kubenswrapper[5004]: I1208 19:07:56.233204 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59104: no serving certificate available for the kubelet" Dec 08 19:07:56 crc kubenswrapper[5004]: I1208 19:07:56.573762 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59120: no serving certificate available for the kubelet" Dec 08 19:07:57 crc kubenswrapper[5004]: I1208 19:07:57.244641 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59134: no serving certificate available for the kubelet" Dec 08 19:07:58 crc kubenswrapper[5004]: I1208 19:07:58.548743 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59150: no serving certificate available for the kubelet" Dec 08 19:08:01 crc kubenswrapper[5004]: I1208 19:08:01.129810 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59166: no serving certificate available for the kubelet" Dec 08 19:08:05 crc kubenswrapper[5004]: I1208 19:08:05.050558 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59178: no serving certificate available for the kubelet" Dec 08 19:08:05 crc kubenswrapper[5004]: I1208 19:08:05.175391 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59186: no serving certificate available for the kubelet" Dec 08 19:08:05 crc kubenswrapper[5004]: I1208 19:08:05.233263 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59188: no serving certificate available for the kubelet" Dec 08 19:08:06 crc kubenswrapper[5004]: I1208 19:08:06.274656 5004 ???:1] "http: TLS handshake error from 192.168.126.11:54502: no serving certificate available for the kubelet" Dec 08 19:08:16 crc kubenswrapper[5004]: I1208 19:08:16.551505 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34896: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.497754 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34908: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.713946 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34910: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.718670 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34924: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.724948 5004 ???:1] "http: TLS 
handshake error from 192.168.126.11:34938: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.873971 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34944: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.874463 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34954: no serving certificate available for the kubelet" Dec 08 19:08:19 crc kubenswrapper[5004]: I1208 19:08:19.912579 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34962: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.060113 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34978: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.207952 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35004: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.208733 5004 ???:1] "http: TLS handshake error from 192.168.126.11:34988: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.271715 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35008: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.421091 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35016: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.437902 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35026: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.497806 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35028: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.603663 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35040: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.798048 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35056: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.804656 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35066: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.805044 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35068: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.934359 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35076: no serving certificate available for the kubelet" Dec 08 19:08:20 crc kubenswrapper[5004]: I1208 19:08:20.978938 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35084: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.027396 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35100: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.149260 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35110: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.310982 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35126: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 
19:08:21.347023 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35140: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.394798 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35150: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.533216 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35154: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.566586 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35158: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.585507 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35168: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.719083 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35184: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.881450 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35198: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.903342 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35204: no serving certificate available for the kubelet" Dec 08 19:08:21 crc kubenswrapper[5004]: I1208 19:08:21.954376 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35216: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.140122 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35226: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.179977 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35238: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.184402 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35250: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.239420 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35258: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.380001 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35274: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.392799 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35276: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.409538 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35290: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.536514 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35298: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.559164 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35306: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.587729 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35312: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.617548 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35326: no serving certificate available for the kubelet" Dec 08 19:08:22 crc 
kubenswrapper[5004]: I1208 19:08:22.770218 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35332: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.903696 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35344: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.905774 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35360: no serving certificate available for the kubelet" Dec 08 19:08:22 crc kubenswrapper[5004]: I1208 19:08:22.914203 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35376: no serving certificate available for the kubelet" Dec 08 19:08:23 crc kubenswrapper[5004]: I1208 19:08:23.077536 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35390: no serving certificate available for the kubelet" Dec 08 19:08:23 crc kubenswrapper[5004]: I1208 19:08:23.081709 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35402: no serving certificate available for the kubelet" Dec 08 19:08:23 crc kubenswrapper[5004]: I1208 19:08:23.083931 5004 ???:1] "http: TLS handshake error from 192.168.126.11:35404: no serving certificate available for the kubelet" Dec 08 19:08:31 crc kubenswrapper[5004]: I1208 19:08:31.000340 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:08:31 crc kubenswrapper[5004]: I1208 19:08:31.001697 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:08:33 crc kubenswrapper[5004]: I1208 19:08:33.882432 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59608: no serving certificate available for the kubelet" Dec 08 19:08:34 crc kubenswrapper[5004]: I1208 19:08:34.081104 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59620: no serving certificate available for the kubelet" Dec 08 19:08:34 crc kubenswrapper[5004]: I1208 19:08:34.100151 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59632: no serving certificate available for the kubelet" Dec 08 19:08:34 crc kubenswrapper[5004]: I1208 19:08:34.258582 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59646: no serving certificate available for the kubelet" Dec 08 19:08:34 crc kubenswrapper[5004]: I1208 19:08:34.294452 5004 ???:1] "http: TLS handshake error from 192.168.126.11:59660: no serving certificate available for the kubelet" Dec 08 19:08:37 crc kubenswrapper[5004]: I1208 19:08:37.053814 5004 ???:1] "http: TLS handshake error from 192.168.126.11:44926: no serving certificate available for the kubelet" Dec 08 19:09:01 crc kubenswrapper[5004]: I1208 19:09:01.000234 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:09:01 crc kubenswrapper[5004]: I1208 19:09:01.000800 5004 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:09:13 crc kubenswrapper[5004]: I1208 19:09:13.160212 5004 generic.go:358] "Generic (PLEG): container finished" podID="f9b866a0-870d-487a-ab82-3754f4497600" containerID="3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82" exitCode=0 Dec 08 19:09:13 crc kubenswrapper[5004]: I1208 19:09:13.160301 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2grff/must-gather-5hgb8" event={"ID":"f9b866a0-870d-487a-ab82-3754f4497600","Type":"ContainerDied","Data":"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82"} Dec 08 19:09:13 crc kubenswrapper[5004]: I1208 19:09:13.161143 5004 scope.go:117] "RemoveContainer" containerID="3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82" Dec 08 19:09:17 crc kubenswrapper[5004]: I1208 19:09:17.840923 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51250: no serving certificate available for the kubelet" Dec 08 19:09:17 crc kubenswrapper[5004]: I1208 19:09:17.965598 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51266: no serving certificate available for the kubelet" Dec 08 19:09:17 crc kubenswrapper[5004]: I1208 19:09:17.975754 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51282: no serving certificate available for the kubelet" Dec 08 19:09:17 crc kubenswrapper[5004]: I1208 19:09:17.997366 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51292: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.007173 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51306: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.021246 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51312: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.033255 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51326: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.041969 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51342: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.049739 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51348: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.061725 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51362: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.195991 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51374: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.206460 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51388: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.236107 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51398: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.247223 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51414: no serving certificate available for the kubelet" Dec 08 19:09:18 crc 
kubenswrapper[5004]: I1208 19:09:18.260562 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51418: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.270930 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51432: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.281864 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51448: no serving certificate available for the kubelet" Dec 08 19:09:18 crc kubenswrapper[5004]: I1208 19:09:18.290641 5004 ???:1] "http: TLS handshake error from 192.168.126.11:51452: no serving certificate available for the kubelet" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.328419 5004 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2grff/must-gather-5hgb8"] Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.329136 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-2grff/must-gather-5hgb8" podUID="f9b866a0-870d-487a-ab82-3754f4497600" containerName="copy" containerID="cri-o://bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30" gracePeriod=2 Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.332193 5004 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2grff/must-gather-5hgb8"] Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.333993 5004 status_manager.go:895] "Failed to get status for pod" podUID="f9b866a0-870d-487a-ab82-3754f4497600" pod="openshift-must-gather-2grff/must-gather-5hgb8" err="pods \"must-gather-5hgb8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2grff\": no relationship found between node 'crc' and this object" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.689267 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2grff_must-gather-5hgb8_f9b866a0-870d-487a-ab82-3754f4497600/copy/0.log" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.689985 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.691638 5004 status_manager.go:895] "Failed to get status for pod" podUID="f9b866a0-870d-487a-ab82-3754f4497600" pod="openshift-must-gather-2grff/must-gather-5hgb8" err="pods \"must-gather-5hgb8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2grff\": no relationship found between node 'crc' and this object" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.717951 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpnj7\" (UniqueName: \"kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7\") pod \"f9b866a0-870d-487a-ab82-3754f4497600\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.718133 5004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output\") pod \"f9b866a0-870d-487a-ab82-3754f4497600\" (UID: \"f9b866a0-870d-487a-ab82-3754f4497600\") " Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.724544 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7" (OuterVolumeSpecName: "kube-api-access-mpnj7") pod "f9b866a0-870d-487a-ab82-3754f4497600" (UID: "f9b866a0-870d-487a-ab82-3754f4497600"). InnerVolumeSpecName "kube-api-access-mpnj7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.762844 5004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f9b866a0-870d-487a-ab82-3754f4497600" (UID: "f9b866a0-870d-487a-ab82-3754f4497600"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.820395 5004 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9b866a0-870d-487a-ab82-3754f4497600-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 08 19:09:23 crc kubenswrapper[5004]: I1208 19:09:23.820433 5004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpnj7\" (UniqueName: \"kubernetes.io/projected/f9b866a0-870d-487a-ab82-3754f4497600-kube-api-access-mpnj7\") on node \"crc\" DevicePath \"\"" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.229843 5004 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2grff_must-gather-5hgb8_f9b866a0-870d-487a-ab82-3754f4497600/copy/0.log" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.230530 5004 generic.go:358] "Generic (PLEG): container finished" podID="f9b866a0-870d-487a-ab82-3754f4497600" containerID="bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30" exitCode=143 Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.230665 5004 scope.go:117] "RemoveContainer" containerID="bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.230625 5004 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2grff/must-gather-5hgb8" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.232248 5004 status_manager.go:895] "Failed to get status for pod" podUID="f9b866a0-870d-487a-ab82-3754f4497600" pod="openshift-must-gather-2grff/must-gather-5hgb8" err="pods \"must-gather-5hgb8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2grff\": no relationship found between node 'crc' and this object" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.253379 5004 status_manager.go:895] "Failed to get status for pod" podUID="f9b866a0-870d-487a-ab82-3754f4497600" pod="openshift-must-gather-2grff/must-gather-5hgb8" err="pods \"must-gather-5hgb8\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2grff\": no relationship found between node 'crc' and this object" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.267921 5004 scope.go:117] "RemoveContainer" containerID="3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.327821 5004 scope.go:117] "RemoveContainer" containerID="bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30" Dec 08 19:09:24 crc kubenswrapper[5004]: E1208 19:09:24.328409 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30\": container with ID starting with bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30 not found: ID does not exist" containerID="bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.328445 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30"} err="failed to get container status \"bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30\": rpc error: code = NotFound desc = could not find container \"bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30\": container with ID starting with bb6e52d08bdb15edb6c5743d11a9c7ad63004c2ec4d22145fd3af2167414ad30 not found: ID does not exist" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.328465 5004 scope.go:117] "RemoveContainer" containerID="3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82" Dec 08 19:09:24 crc kubenswrapper[5004]: E1208 19:09:24.328742 5004 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82\": container with ID starting with 3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82 not found: ID does not exist" containerID="3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.328784 5004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82"} err="failed to get container status \"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82\": rpc error: code = NotFound desc = could not find container \"3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82\": container with ID starting with 
3509d3308c10a88260481e1a2c9b8b24b485a0328c81c266d3fe234a1de1dd82 not found: ID does not exist" Dec 08 19:09:24 crc kubenswrapper[5004]: I1208 19:09:24.716766 5004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b866a0-870d-487a-ab82-3754f4497600" path="/var/lib/kubelet/pods/f9b866a0-870d-487a-ab82-3754f4497600/volumes" Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:30.999908 5004 patch_prober.go:28] interesting pod/machine-config-daemon-xnzfz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.000220 5004 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.000275 5004 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.000822 5004 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cd74838e7224901f7e38c38df57a40ad6f7276f3fe12262e14eac81795f83ac"} pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.000874 5004 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" podUID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerName="machine-config-daemon" containerID="cri-o://3cd74838e7224901f7e38c38df57a40ad6f7276f3fe12262e14eac81795f83ac" gracePeriod=600 Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.276605 5004 generic.go:358] "Generic (PLEG): container finished" podID="5db7afc3-55ae-4aa9-9946-c263aeffae20" containerID="3cd74838e7224901f7e38c38df57a40ad6f7276f3fe12262e14eac81795f83ac" exitCode=0 Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.276674 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerDied","Data":"3cd74838e7224901f7e38c38df57a40ad6f7276f3fe12262e14eac81795f83ac"} Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.277396 5004 scope.go:117] "RemoveContainer" containerID="2a43ca7d951e3eaaf8b745ab9b98e0838967e3dd8006f2c846fff37931e0b973" Dec 08 19:09:31 crc kubenswrapper[5004]: I1208 19:09:31.750443 5004 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:09:32 crc kubenswrapper[5004]: I1208 19:09:32.285518 5004 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xnzfz" event={"ID":"5db7afc3-55ae-4aa9-9946-c263aeffae20","Type":"ContainerStarted","Data":"3ae9b9c0d72bc96802f6a54461e57d3c6c2d53a21458aea78d5f56316e6802cf"}
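
Note (not part of the captured log): the long run of "no serving certificate available for the kubelet" TLS handshake errors, together with the "Certificate request was not signed: timed out waiting for the condition" entry at 19:07:53 and the subsequent "Rotating certificates" attempt, is consistent with the kubelet's serving CertificateSigningRequest never being approved during this window. The usual manual check is `oc get csr` followed by `oc adm certificate approve <csr-name>` for the pending kubernetes.io/kubelet-serving request. The following is a minimal, illustrative Python sketch of the same listing step, assuming the official kubernetes client package and a kubeconfig with permission to read CSRs; it is not derived from this log and the function name is hypothetical.

    # List CSRs for the kubelet-serving signer that have not been approved or denied.
    # Assumes: `pip install kubernetes` and a kubeconfig with rights to read CSRs.
    from kubernetes import client, config

    def pending_kubelet_serving_csrs():
        config.load_kube_config()              # or config.load_incluster_config() inside a pod
        certs = client.CertificatesV1Api()
        pending = []
        for csr in certs.list_certificate_signing_request().items:
            if csr.spec.signer_name != "kubernetes.io/kubelet-serving":
                continue
            conditions = (csr.status.conditions if csr.status else None) or []
            # A CSR is still pending if it carries neither an Approved nor a Denied condition.
            if not any(c.type in ("Approved", "Denied") for c in conditions):
                pending.append(csr.metadata.name)
        return pending

    if __name__ == "__main__":
        for name in pending_kubelet_serving_csrs():
            print(name)   # approve manually, e.g. `oc adm certificate approve <name>`

If this listing returns nothing while the handshake errors persist, the problem more likely lies with the approver controller or with the node's client credentials rather than with an unapproved request.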