Dec 12 15:21:01 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 15:21:02 crc kubenswrapper[5099]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.131997 5099 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136207 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136234 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136241 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136247 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136253 5099 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136260 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136266 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136273 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136278 5099 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136283 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136289 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136293 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136310 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136317 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136324 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136332 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136337 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136343 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136347 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136355 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136361 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136367 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136372 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136377 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136383 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136388 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136394 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136399 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136404 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136409 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136414 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136420 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136426 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136431 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136436 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136441 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136446 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136452 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136457 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136462 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136468 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136473 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136478 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136482 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136487 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136493 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136498 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136503 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136508 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136513 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136518 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136522 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136528 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136533 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136537 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136542 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136548 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136554 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136577 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136584 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136591 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136604 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136611 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136617 5099 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136629 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136635 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136640 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136647 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136653 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136682 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136689 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136695 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136701 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136707 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136714 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136719 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136726 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136737 5099 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136745 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136752 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136758 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136764 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136771 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136777 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136783 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.136789 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137443 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137456 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137461 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137467 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137472 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137477 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137482 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137487 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137492 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137497 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137517 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137530 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137545 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137550 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137555 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137566 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137576 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137582 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137587 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137592 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137597 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137603 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
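(Aside: the deprecation warnings at the top of this log all point the same way: each flag should move into the file passed to --config, which the FLAG dump below shows is /etc/kubernetes/kubelet.conf here. As a rough illustration of that mapping, the Go sketch below emits the config-file equivalents of the deprecated flags, using the values visible later in this log. The field names follow the KubeletConfiguration v1beta1 schema as I understand it, and this is illustrative only: on a CRC/OpenShift node this file is rendered by the machine-config operator, not written by hand. --minimum-container-ttl-duration has no direct field; per the warning, evictionHard/evictionSoft replace it.)

// kubeletconf.go: sketch of config-file equivalents for the deprecated flags
// warned about above. Values are taken from the FLAG dump later in this log.
package main

import (
	"encoding/json"
	"fmt"
)

type taint struct {
	Key    string `json:"key"`
	Value  string `json:"value,omitempty"`
	Effect string `json:"effect"`
}

type kubeletConfig struct {
	APIVersion               string            `json:"apiVersion"`
	Kind                     string            `json:"kind"`
	ContainerRuntimeEndpoint string            `json:"containerRuntimeEndpoint"` // replaces --container-runtime-endpoint
	VolumePluginDir          string            `json:"volumePluginDir"`          // replaces --volume-plugin-dir
	RegisterWithTaints       []taint           `json:"registerWithTaints"`       // replaces --register-with-taints
	SystemReserved           map[string]string `json:"systemReserved"`           // replaces --system-reserved
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",
		VolumePluginDir:          "/etc/kubernetes/kubelet-plugins/volume/exec",
		RegisterWithTaints: []taint{
			{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"},
		},
		SystemReserved: map[string]string{
			"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi",
		},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	// JSON is a subset of YAML, so a --config file in this form should parse.
	fmt.Println(string(out))
}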
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137610 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137616 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137622 5099 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137627 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137632 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137637 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137642 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137647 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137652 5099 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137657 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137682 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137687 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137692 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137697 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137704 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137710 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137716 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137721 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137727 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137732 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137746 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137786 5099 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137794 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137800 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137805 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137810 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137815 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137820 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137825 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137831 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137836 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137841 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137846 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137852 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137857 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137863 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137868 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137873 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137878 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137883 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
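(Aside: these feature_gate.go:328 warnings are expected on OpenShift. The cluster-wide gate list includes many OpenShift-only gates (GatewayAPI, PinnedImages, the NewOLM* family, and so on), while the kubelet only registers upstream Kubernetes gates, so anything it does not recognize is warned about and ignored; gates it does know but that are deprecated or GA produce the feature_gate.go:349/351 messages instead. A toy model of that behavior, just the shape of it, not the actual k8s.io/component-base/featuregate code:)

// gatecheck.go: toy model of the warning behavior seen above -- apply a
// cluster-level gate list against the set of gates this binary knows about,
// warning on (and ignoring) unknown names.
package main

import "fmt"

func main() {
	known := map[string]bool{ // gates this binary was compiled with, and their defaults
		"KMSv1": false, "NodeSwap": false, "ImageVolume": true,
	}
	requested := map[string]bool{ // cluster-level list; includes OpenShift-only gates
		"KMSv1": true, "GatewayAPI": true, "PinnedImages": true,
	}
	for name, val := range requested {
		if _, ok := known[name]; !ok {
			fmt.Printf("W unrecognized feature gate: %s\n", name) // cf. feature_gate.go:328
			continue
		}
		known[name] = val // known gate: apply the requested value
	}
	fmt.Printf("I feature gates: %v\n", known) // cf. the feature_gate.go:384 summary below
}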
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137887 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137892 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137897 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137902 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137907 5099 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137912 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137918 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137923 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137928 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137934 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137938 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137943 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137957 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137984 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.137995 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138000 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138005 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138010 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138015 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138020 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138025 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138030 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138034 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.138039 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138712 5099 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138738 5099 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138755 5099 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138764 5099 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138774 5099 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138781 5099 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138791 5099 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138801 5099 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138808 5099 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138815 5099 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138823 5099 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138830 5099 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138838 5099 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138845 5099 flags.go:64] FLAG: --cgroup-root=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138851 5099 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138858 5099 flags.go:64] FLAG: --client-ca-file=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138865 5099 flags.go:64] FLAG: --cloud-config=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138872 5099 flags.go:64] FLAG: --cloud-provider=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138878 5099 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138887 5099 flags.go:64] FLAG: --cluster-domain=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138894 5099 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138932 5099 flags.go:64] FLAG: --config-dir=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138939 5099 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138947 5099 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138956 5099 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138962 5099 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138969 5099 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138977 5099 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138984 5099 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138992 5099 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.138999 5099 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139006 5099 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139014 5099 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139024 5099 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139031 5099 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139038 5099 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139045 5099 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139056 5099 flags.go:64] FLAG: --enable-server="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139064 5099 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139075 5099 flags.go:64] FLAG: --event-burst="100"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139084 5099 flags.go:64] FLAG: --event-qps="50"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139090 5099 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139096 5099 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139102 5099 flags.go:64] FLAG: --eviction-hard=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139109 5099 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139114 5099 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139120 5099 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139126 5099 flags.go:64] FLAG: --eviction-soft=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139132 5099 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139137 5099 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139143 5099 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139148 5099 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139155 5099 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139183 5099 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139189 5099 flags.go:64] FLAG: --feature-gates=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139196 5099 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139202 5099 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139208 5099 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139214 5099 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139219 5099 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139225 5099 flags.go:64] FLAG: --help="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139232 5099 flags.go:64] FLAG: --hostname-override=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139239 5099 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139247 5099 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139254 5099 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139261 5099 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139268 5099 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139275 5099 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139281 5099 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139288 5099 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139296 5099 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139303 5099 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139311 5099 flags.go:64] FLAG: --kube-api-qps="50"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139319 5099 flags.go:64] FLAG: --kube-reserved=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139326 5099 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139333 5099 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139341 5099 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139347 5099 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139355 5099 flags.go:64] FLAG: --lock-file=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139362 5099 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139370 5099 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139377 5099 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139390 5099 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139396 5099 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139405 5099 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139446 5099 flags.go:64] FLAG: --logging-format="text"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139455 5099 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139461 5099 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139466 5099 flags.go:64] FLAG: --manifest-url=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139472 5099 flags.go:64] FLAG: --manifest-url-header=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139484 5099 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139491 5099 flags.go:64] FLAG: --max-open-files="1000000"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139501 5099 flags.go:64] FLAG: --max-pods="110"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139509 5099 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139516 5099 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139523 5099 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139530 5099 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139538 5099 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139560 5099 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139566 5099 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139581 5099 flags.go:64] FLAG: --node-status-max-images="50"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139586 5099 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139592 5099 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139601 5099 flags.go:64] FLAG: --pod-cidr=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139608 5099 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139621 5099 flags.go:64] FLAG: --pod-manifest-path=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139628 5099 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139636 5099 flags.go:64] FLAG: --pods-per-core="0"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139643 5099 flags.go:64] FLAG: --port="10250"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139650 5099 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139657 5099 flags.go:64] FLAG: --provider-id=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139689 5099 flags.go:64] FLAG: --qos-reserved=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139697 5099 flags.go:64] FLAG: --read-only-port="10255"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139704 5099 flags.go:64] FLAG: --register-node="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139711 5099 flags.go:64] FLAG: --register-schedulable="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139718 5099 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139731 5099 flags.go:64] FLAG: --registry-burst="10"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139738 5099 flags.go:64] FLAG: --registry-qps="5"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139776 5099 flags.go:64] FLAG: --reserved-cpus=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139783 5099 flags.go:64] FLAG: --reserved-memory=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139793 5099 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139800 5099 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139807 5099 flags.go:64] FLAG: --rotate-certificates="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139814 5099 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139821 5099 flags.go:64] FLAG: --runonce="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139828 5099 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139835 5099 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139843 5099 flags.go:64] FLAG: --seccomp-default="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139850 5099 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139855 5099 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139864 5099 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139870 5099 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139876 5099 flags.go:64] FLAG: --storage-driver-password="root"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139881 5099 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139887 5099 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139892 5099 flags.go:64] FLAG: --storage-driver-user="root"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139899 5099 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139905 5099 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139911 5099 flags.go:64] FLAG: --system-cgroups=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139916 5099 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139925 5099 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139931 5099 flags.go:64] FLAG: --tls-cert-file=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139936 5099 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139943 5099 flags.go:64] FLAG: --tls-min-version=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139949 5099 flags.go:64] FLAG: --tls-private-key-file=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139954 5099 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139960 5099 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139965 5099 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139971 5099 flags.go:64] FLAG: --v="2"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.139979 5099 flags.go:64] FLAG: --version="false"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.140011 5099 flags.go:64] FLAG: --vmodule=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.140019 5099 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.140025 5099 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140180 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140188 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140194 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140199 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140205 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140210 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140215 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140222 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140227 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140234 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140239 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140244 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140249 5099 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140254 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140260 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140264 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140270 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140275 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140280 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140286 5099 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140291 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140296 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140300 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140307 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140314 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140320 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140327 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140333 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140339 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140346 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140352 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140357 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140362 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140368 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140373 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140379 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140384 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140388 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140393 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140398 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140404 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140409 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140414 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140419 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140423 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140428 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140434 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140439 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140443 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140449 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140454 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140459 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140465 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140469 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140474 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140479 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140484 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140489 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140494 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140499 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140504 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140509 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140514 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140519 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140524 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140529 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140534 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140539 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140544 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140548 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140553 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140558 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140563 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140568 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140573 5099 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140578 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140583 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140589 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140594 5099 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140599 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140605 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140610 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140616 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140621 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140626 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.140631 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.140650 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.152050 5099 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.152489 5099 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152596 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152617 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152625 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152632 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152641 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152649 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152655 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152701 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152709 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152716 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152721 5099 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152728 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152737 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152744 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152751 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152757 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152763 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152770 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152777 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152783 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152789 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152795 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152802 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152819 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152827 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152833 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152840 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152846 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152853 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152859 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152866 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152872 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152878 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152887 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152898 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152906 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152915 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152924 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152947 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152954 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152961 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152967 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.152975 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153030 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153041 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153049 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153056 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153063 5099 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153070 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153077 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153083 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153090 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153096 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153102 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153108 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153116 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153123 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153143 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:21:02
crc kubenswrapper[5099]: W1212 15:21:02.153165 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153179 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153185 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153191 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153199 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153208 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153215 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153222 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153229 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153235 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153240 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153246 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153252 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153258 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153264 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153269 5099 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153283 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153288 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153330 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153339 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153344 5099 feature_gate.go:328] unrecognized feature gate: Example Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153355 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153366 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153372 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153377 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153382 5099 
feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153387 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153392 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.153401 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153569 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153580 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153586 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153592 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153597 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153604 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153611 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153617 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153622 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153627 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153633 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153637 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153642 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153647 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153653 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153682 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
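
The repeated feature_gate.go:328 warnings above are the kubelet's gate parser meeting OpenShift-specific gate names it does not know: each unknown name is logged and skipped rather than failing startup, and the feature_gate.go:384 lines show the resolved map containing only the upstream gates. Below is a minimal, stdlib-only Go sketch of that tolerant merge behavior; the gate names and defaults are taken from the resolved map logged here, but the lifecycle stages are illustrative and the real kubelet uses k8s.io/component-base/featuregate, not this code.

```go
package main

import "log"

// spec mirrors the idea of a registered feature gate: a default value plus
// a lifecycle stage. Stages here are illustrative, not the kubelet's tables.
type spec struct {
	def        bool
	deprecated bool
	ga         bool
}

// known holds a few gates that appear in the resolved map logged at
// feature_gate.go:384; anything absent from this table is "unrecognized".
var known = map[string]spec{
	"ImageVolume":                    {def: true},
	"KMSv1":                          {def: false, deprecated: true},
	"ServiceAccountTokenNodeBinding": {def: false, ga: true},
	"NodeSwap":                       {def: false},
}

// merge applies requested gate values, warning (not failing) on unknown
// names -- the behavior the W1212 feature_gate.go:328 lines record.
func merge(requested map[string]bool) map[string]bool {
	resolved := map[string]bool{}
	for name, s := range known {
		resolved[name] = s.def
	}
	for name, val := range requested {
		s, ok := known[name]
		if !ok {
			log.Printf("unrecognized feature gate: %s", name)
			continue
		}
		if s.deprecated {
			log.Printf("Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, val)
		}
		if s.ga {
			log.Printf("Setting GA feature gate %s=%v. It will be removed in a future release.", name, val)
		}
		resolved[name] = val
	}
	return resolved
}

func main() {
	// One OpenShift-only name plus three upstream ones, as in the log above.
	got := merge(map[string]bool{
		"GatewayAPI":                     true, // unknown here: warn and skip
		"KMSv1":                          true,
		"ServiceAccountTokenNodeBinding": true,
		"ImageVolume":                    true,
	})
	log.Printf("feature gates: %v", got)
}
```
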
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153689 5099 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153694 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153699 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153704 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153719 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153739 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153764 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153771 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153786 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153796 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153812 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153817 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153822 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153827 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153832 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153838 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153842 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153847 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153852 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153857 5099 feature_gate.go:328] unrecognized feature gate: Example Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153862 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153867 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153872 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153877 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153882 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153887 5099 feature_gate.go:328] unrecognized feature gate: 
ClusterAPIInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153892 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153899 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153904 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153909 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153914 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153919 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153924 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153929 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153934 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153938 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153945 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153950 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153955 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153973 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153982 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153990 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.153997 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154004 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154011 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154017 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154024 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154030 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154037 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154043 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154049 5099 feature_gate.go:328] unrecognized feature 
gate: OVNObservability
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154055 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154061 5099 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154067 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154073 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154079 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154085 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154090 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154095 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154100 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154105 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154110 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154115 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154120 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154125 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154130 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154135 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154140 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154146 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.154151 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.154160 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.154755 5099 server.go:962] "Client rotation is on, will bootstrap in background"
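
The bootstrap.go:266 error that follows reports that the client certificate referenced by /var/lib/kubelet/kubeconfig expired on 2025-12-03, which is why the subsequent records fall back to the bootstrap credentials and attempt a fresh CSR (a request that then fails while the API server at api-int.crc.testing:6443 is unreachable). A sketch of the expiry check itself, using only the Go standard library and the file path from the certificate_store.go:147 record; the kubelet's actual check lives in its bootstrap package, so this is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expired reports whether the first certificate in a PEM bundle is past its
// NotAfter date -- the condition bootstrap.go:266 is complaining about.
func expired(pemPath string, now time.Time) (bool, time.Time, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, time.Time{}, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, time.Time{}, fmt.Errorf("no PEM data in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, time.Time{}, err
	}
	return now.After(cert.NotAfter), cert.NotAfter, nil
}

func main() {
	// Path from the certificate_store.go:147 record; run on the node itself.
	isExpired, notAfter, err := expired("/var/lib/kubelet/pki/kubelet-client-current.pem", time.Now())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("expired=%v notAfter=%v\n", isExpired, notAfter)
}
```
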
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.162434 5099 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.166420 5099 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.166595 5099 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.167482 5099 server.go:1019] "Starting client certificate rotation"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.167704 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.167784 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.252410 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.258186 5099 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.277562 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.281864 5099 log.go:25] "Validated CRI v1 runtime API"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.303052 5099 log.go:25] "Validated CRI v1 image API"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.304505 5099 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.310610 5099 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-15-14-34-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.310676 5099 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.329972 5099 manager.go:217] Machine: {Timestamp:2025-12-12 15:21:02.328333655 +0000 UTC m=+0.432242316 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356
SystemUUID:c9a395de-6bcb-4f0c-8f70-eabd9ff65c63 BootID:233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bb:87:1b Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bb:87:1b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:56:96:f7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4b:3d:c3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:53:eb:9a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:55:ba:07 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:96:c9:de:62:be Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:ce:b7:ef:9c:6a Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data 
Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.330284 5099 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.330490 5099 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331648 5099 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331704 5099 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331897 5099 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331910 5099 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331933 5099 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332136 5099 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332464 5099 state_mem.go:36] "Initialized new in-memory state store" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332633 5099 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333789 5099 kubelet.go:491] "Attempting to sync node with API server" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333814 5099 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333849 5099 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333865 5099 kubelet.go:397] "Adding apiserver pod source" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333883 5099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.335814 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:21:02 crc 
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331897 5099 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331910 5099 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.331933 5099 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332136 5099 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332464 5099 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.332633 5099 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333789 5099 kubelet.go:491] "Attempting to sync node with API server"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333814 5099 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333849 5099 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333865 5099 kubelet.go:397] "Adding apiserver pod source"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.333883 5099 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.335814 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.335820 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.335937 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.336120 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.337247 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.337334 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.338819 5099 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.343121 5099 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.343656 5099 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344186 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344244 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344252 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344259 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344268 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344279 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344287 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344293 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344301 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344321 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344348 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344515 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.344753 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 15:21:02 crc
kubenswrapper[5099]: I1212 15:21:02.344766 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.345708 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.359115 5099 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.359203 5099 server.go:1295] "Started kubelet" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.360187 5099 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.360174 5099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 15:21:02 crc systemd[1]: Started Kubernetes Kubelet. Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.360434 5099 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.368792 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188080f959178adf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.359161567 +0000 UTC m=+0.463070208,LastTimestamp:2025-12-12 15:21:02.359161567 +0000 UTC m=+0.463070208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.386858 5099 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.388754 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.389890 5099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.390378 5099 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.390400 5099 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.390464 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.390497 5099 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.390956 5099 server.go:317] "Adding debug handlers to kubelet server" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.391030 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.391363 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="200ms" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.396547 5099 factory.go:153] Registering CRI-O factory Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.396661 5099 factory.go:223] Registration of the crio container factory successfully Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.396908 5099 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.396945 5099 factory.go:55] Registering systemd factory Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.396963 5099 factory.go:223] Registration of the systemd container factory successfully Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.397001 5099 factory.go:103] Registering Raw factory Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.397049 5099 manager.go:1196] Started watching for new ooms in manager Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.398864 5099 manager.go:319] Starting recovery of all containers Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425499 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425514 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425523 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425531 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425540 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425551 5099 reconstruct.go:130] "Volume is 
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425499 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425514 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425523 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425531 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425540 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425551 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425560 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425609 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425627 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.425643 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427085 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427102 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427113 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427125 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427134 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427142 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427152 5099 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427162 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427170 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427183 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427193 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427200 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427209 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427217 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427225 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427234 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427243 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427273 5099 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427303 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427310 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427319 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427327 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427335 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427348 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427359 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427366 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427373 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427389 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427397 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427405 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427429 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427439 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427447 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427455 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427462 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427471 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427480 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427489 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427498 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427508 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427517 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427525 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427533 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427541 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427579 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427589 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427597 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427604 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427614 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427621 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427629 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427637 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427645 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427674 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427684 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427693 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427702 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427710 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427722 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427744 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427754 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427768 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427776 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427787 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427799 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427808 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427816 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427829 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427838 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427845 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427854 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427862 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427869 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427877 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427885 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427893 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427900 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427908 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427916 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427924 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427932 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427940 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427948 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427956 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427963 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427971 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427979 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427987 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.427996 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428050 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428067 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428079 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428088 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428097 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428104 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428113 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428122 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428129 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428137 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428149 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428169 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428529 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428583 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428609 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428627 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428649 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428694 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428708 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.428726 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432756 5099 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432819 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432840 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432871 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432888 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432903 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432913 5099 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432926 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432938 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432948 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432960 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432968 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432980 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.432990 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433004 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433034 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433046 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433065 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433079 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433095 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433112 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433125 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433143 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433154 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433168 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433202 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433221 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433233 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433245 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433261 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433276 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433300 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433363 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433380 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433393 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433405 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433421 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433433 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433449 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433460 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433477 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433489 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433501 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433515 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433527 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433542 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433556 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433598 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433614 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433625 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433638 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433651 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433756 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433772 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433780 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433791 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433817 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433830 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433840 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433850 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433861 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433870 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433882 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433892 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433903 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433934 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433944 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433974 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.433987 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434008 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434024 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434042 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434051 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434059 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434069 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434077 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434088 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434096 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434107 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434117 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434127 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434153 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434163 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434173 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434182 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434192 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434201 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434210 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434220 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434228 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434239 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434248 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434258 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434267 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434276 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434289 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434305 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434318 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434334 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434344 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434367 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434381 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434397 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434411 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434533 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434548 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" 
volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434557 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434605 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434623 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.434636 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435844 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435899 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435921 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435933 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435948 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.435962 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436007 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436027 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436043 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436063 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436075 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436088 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436099 5099 reconstruct.go:97] "Volume reconstruction finished" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436106 5099 reconciler.go:26] "Reconciler: start to sync state" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.436983 5099 manager.go:324] Recovery completed Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.449117 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.450610 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.450677 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.450693 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.451415 5099 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.451545 5099 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.451563 5099 state_mem.go:36] "Initialized new in-memory state store" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.456217 5099 policy_none.go:49] "None policy: Start" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.456261 5099 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.456304 5099 state_mem.go:35] "Initializing new in-memory state store" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.462635 5099 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.465268 5099 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.465506 5099 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.465542 5099 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.465556 5099 kubelet.go:2451] "Starting kubelet main sync loop" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.465690 5099 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.466800 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.490655 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.505312 5099 manager.go:341] "Starting Device Plugin manager" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.505713 5099 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.505744 5099 server.go:85] "Starting device plugin registration server" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.506540 5099 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.506587 5099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.506778 5099 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.506897 5099 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.506914 5099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.510718 5099 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.510785 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.565796 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.566105 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.567202 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.567476 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.567492 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.568794 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.569084 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.569185 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570064 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570082 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570241 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570536 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.570585 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571182 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571314 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571377 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571868 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571898 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.571910 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572000 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572023 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572815 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572891 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.572961 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573527 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573549 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573577 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573705 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573775 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.573808 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.574620 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.574763 5099 util.go:30] "No sandbox for pod can be found. 
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.574807 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575452 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575487 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575505 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575536 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575557 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.575571 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.576508 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.576550 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.577522 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.577556 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.577568 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.596910 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="400ms"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.606960 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.608215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.608299 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.608430 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.608501 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.609424 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.612461 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.619180 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.638936 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639528 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639558 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639587 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639883 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639957 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.639981 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640021 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640101 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640188 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640248 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640272 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640331 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640356 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640417 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640454 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640503 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640552 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640556 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640574 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640855 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640881 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.640902 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.641088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.641719 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.641915 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.645579 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.645878 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.645953 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.653690 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.655119 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746591 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746655 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746699 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746742 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746776 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746801 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746822 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746843 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746868 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746887 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746906 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746927 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746953 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746972 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.746990 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747011 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747430 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747491 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747518 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747581 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747567 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747621 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747675 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747697 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747719 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747726 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747779 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747815 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747867 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747892 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.747946 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.810444 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.811591 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.811651 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.811684 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.811714 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:02 crc kubenswrapper[5099]: 
E1212 15:21:02.812328 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.914043 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.920408 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.940162 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.946163 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-e782fd6ab8bf62af6ca8daf8c22631454017166f4fb19304687917578bef8f81 WatchSource:0}: Error finding container e782fd6ab8bf62af6ca8daf8c22631454017166f4fb19304687917578bef8f81: Status 404 returned error can't find the container with id e782fd6ab8bf62af6ca8daf8c22631454017166f4fb19304687917578bef8f81 Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.948174 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-824d49edb6423382b514d653f922d78280b297613dd5624c771944b5544497f2 WatchSource:0}: Error finding container 824d49edb6423382b514d653f922d78280b297613dd5624c771944b5544497f2: Status 404 returned error can't find the container with id 824d49edb6423382b514d653f922d78280b297613dd5624c771944b5544497f2 Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.951392 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.954400 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: I1212 15:21:02.955361 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.960981 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-6e523432ca44e9b7f8553c12a2096cfd42f40870ecb0aa8b9efee94f146d7ec5 WatchSource:0}: Error finding container 6e523432ca44e9b7f8553c12a2096cfd42f40870ecb0aa8b9efee94f146d7ec5: Status 404 returned error can't find the container with id 6e523432ca44e9b7f8553c12a2096cfd42f40870ecb0aa8b9efee94f146d7ec5 Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.973348 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-1e69278382ccdad7d5c2b6f6156257363c49dc9fc905837ad35bc72183e3ded1 WatchSource:0}: Error finding container 1e69278382ccdad7d5c2b6f6156257363c49dc9fc905837ad35bc72183e3ded1: Status 404 returned error can't find the container with id 1e69278382ccdad7d5c2b6f6156257363c49dc9fc905837ad35bc72183e3ded1 Dec 12 15:21:02 crc kubenswrapper[5099]: W1212 15:21:02.979055 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-d4ddc82d6edc2c7a1c15970fae7e2b2d4b5e7869399d0ffade4f46a738a230db WatchSource:0}: Error finding container d4ddc82d6edc2c7a1c15970fae7e2b2d4b5e7869399d0ffade4f46a738a230db: Status 404 returned error can't find the container with id d4ddc82d6edc2c7a1c15970fae7e2b2d4b5e7869399d0ffade4f46a738a230db Dec 12 15:21:02 crc kubenswrapper[5099]: E1212 15:21:02.998713 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="800ms" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.212761 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.223586 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.223637 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.223683 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.223711 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.224758 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.347027 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.351487 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.471173 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d4ddc82d6edc2c7a1c15970fae7e2b2d4b5e7869399d0ffade4f46a738a230db"} Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.472503 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"1e69278382ccdad7d5c2b6f6156257363c49dc9fc905837ad35bc72183e3ded1"} Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.475608 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6e523432ca44e9b7f8553c12a2096cfd42f40870ecb0aa8b9efee94f146d7ec5"} Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.476689 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e782fd6ab8bf62af6ca8daf8c22631454017166f4fb19304687917578bef8f81"} Dec 12 15:21:03 crc kubenswrapper[5099]: I1212 15:21:03.479326 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"824d49edb6423382b514d653f922d78280b297613dd5624c771944b5544497f2"} Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.485258 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.528944 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.800106 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="1.6s" Dec 12 15:21:03 crc kubenswrapper[5099]: E1212 15:21:03.841414 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.025904 5099 kubelet_node_status.go:413] "Setting node annotation to enable 
volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.028217 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.028282 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.028292 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.028337 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.029222 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.346512 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.466720 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.468023 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.483337 5099 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="83e5bb42c0cbebaa902028ee4ec0f0e538ae7f1943973db4bd916ee9243d2905" exitCode=0 Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.483382 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"83e5bb42c0cbebaa902028ee4ec0f0e538ae7f1943973db4bd916ee9243d2905"} Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.483491 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.484266 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.484286 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.484295 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.484451 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.484911 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9"} Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.486856 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2fc9a00e37b1547b4a00b0a5818ba6fd62e1622ac01a848616d6ea2cb5ae35ac" exitCode=0 Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.486924 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2fc9a00e37b1547b4a00b0a5818ba6fd62e1622ac01a848616d6ea2cb5ae35ac"} Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.486955 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.487345 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.487362 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.487371 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.487499 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488282 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5cdfce8f83f2327b25c7f22f1868bafe6e8d636838d248b903618e8c2a4fb08b" exitCode=0 Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488421 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5cdfce8f83f2327b25c7f22f1868bafe6e8d636838d248b903618e8c2a4fb08b"} Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488603 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488890 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488906 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.488914 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.489057 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.489218 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.489238 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.489247 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.489893 5099 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="74f0e2e484450923bf824ae37cf7682525765962ec622b408e3e6cee712d62f5" exitCode=0 Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.489945 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"74f0e2e484450923bf824ae37cf7682525765962ec622b408e3e6cee712d62f5"} Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.490033 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.490456 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.490479 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5099]: I1212 15:21:04.490489 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5099]: E1212 15:21:04.490693 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.191623 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.346808 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.402240 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="3.2s" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.493873 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="670e55a23cf26db079e729fc2d303305c6cfc976cfc2625560170a9773b89d7d" exitCode=0 Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.493934 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"670e55a23cf26db079e729fc2d303305c6cfc976cfc2625560170a9773b89d7d"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.494050 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.494735 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.494765 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.494774 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.495076 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.498004 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"69f2d84d6e1a7de5e88efac2252c13f4edec36566a6e36ccf8997c2c0e770bbd"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.498110 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.501215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.501240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.501250 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.501430 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.503372 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"99e8ac8cca759b5e97b4837de2b4c47f48e65eba25093437a75f7a3fd54ccf83"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.503419 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c1b0af41b3f61573fb3e181d47ddd0779f46f4d5d772f73b74bd2ac804554851"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.506823 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e55738e03de3e06ef6f82a4c01bbda678164252cc65a89c3562e80e5451f2cf7"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.511194 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ca3285329aa4c3479abd6f07bee49da719a2390ebd7853988f5b4976e7674ea8"} Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.673800 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.677535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.677591 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.677602 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:05 crc kubenswrapper[5099]: I1212 15:21:05.677626 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.678167 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Dec 12 15:21:05 crc kubenswrapper[5099]: E1212 15:21:05.955330 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:21:06 crc kubenswrapper[5099]: E1212 15:21:06.053555 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:21:06 crc kubenswrapper[5099]: E1212 15:21:06.269395 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.346757 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.525075 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="32bb3d2e134475f8b0863f41ba8879d17927a64dc1ee9202458b42cd0c1fcf19" exitCode=0 Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.525258 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"32bb3d2e134475f8b0863f41ba8879d17927a64dc1ee9202458b42cd0c1fcf19"} Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.525328 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.526030 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.526057 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.526067 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:06 crc kubenswrapper[5099]: E1212 15:21:06.526267 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.530829 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8f29b607f977eaecf63aae1de43b25034927ee39b72e46ad06b2f377326508ae"} Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.531055 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.531756 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.531783 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.531795 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:06 crc kubenswrapper[5099]: E1212 15:21:06.531994 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.540165 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"020d40e58135ecdec57aa9111436c88b85775324c180980117e6b41f0e1249a7"} Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.544302 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"261826a2e5b8909122df752fe1f4ad82d30626fd2d53ea720df5e71448d34d14"} Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.544377 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.552638 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.552704 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:06 crc kubenswrapper[5099]: I1212 15:21:06.552715 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:06 crc kubenswrapper[5099]: E1212 15:21:06.553042 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.552157 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"dbd6070b9997fbf922299d2d5de614331d2859a1c879dcd67076483f83acb766"} Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.552209 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8b8053f468d6ef24360493fcb9938815bf9bd3de045ea4f1e944aac89a2a7f2f"} Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.569287 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9dd1495d2e8782041c8b297544c144ad7c36068b9b6b45168bfae257103e4131"} Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.569473 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.576312 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.576384 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.576395 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:07 crc kubenswrapper[5099]: E1212 15:21:07.576655 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.580168 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e83bfe0224eb5c69f23132252eaba9bc3bd0dac62e3a19cf0262162b4c627b2e"} Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.580213 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a08390e968f9efeb59c99fd00241be16290395dce5767e17f0e950a1770db419"} Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.580319 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.580370 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.580962 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.581000 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:07 crc kubenswrapper[5099]: I1212 15:21:07.581012 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:07 crc kubenswrapper[5099]: E1212 15:21:07.581322 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.442167 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.586147 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b0eb801dce0dfdca8c998ce3b3e2ed9575a4d66a116287d1cd279d5a0efbe3d6"} Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.591134 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.591574 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:08 crc 
kubenswrapper[5099]: I1212 15:21:08.591740 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7149b84d00d7f7a5796fdfe1d71f1e9c215c01a4e9b24391a95d27227d08525"} Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592010 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592148 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592196 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592254 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592273 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592282 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:08 crc kubenswrapper[5099]: E1212 15:21:08.592673 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:08 crc kubenswrapper[5099]: E1212 15:21:08.592787 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592853 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592917 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.592945 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:08 crc kubenswrapper[5099]: E1212 15:21:08.593558 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.603584 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.628507 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.879224 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.880224 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.880288 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 
15:21:08.880304 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:08 crc kubenswrapper[5099]: I1212 15:21:08.880359 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.157450 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600528 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8b13829365333f1323547984f09137d936b7e50c3b6bac449d39dd606bee7f63"} Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600627 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4bb03b3f1c6d08ad647cd4706c0a9933059333767708a3d384626a4d0af666dc"} Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600711 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600816 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600873 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.600957 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601637 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601693 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601707 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601646 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601757 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601778 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601792 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601865 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:09 crc kubenswrapper[5099]: I1212 15:21:09.601892 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:09 crc kubenswrapper[5099]: E1212 15:21:09.602159 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:09 crc kubenswrapper[5099]: E1212 
15:21:09.602607 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:09 crc kubenswrapper[5099]: E1212 15:21:09.602739 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.586481 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.594331 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.603218 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.603295 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.603432 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604166 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604230 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604258 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604284 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604295 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604580 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604823 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:10 crc kubenswrapper[5099]: I1212 15:21:10.604835 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:10 crc kubenswrapper[5099]: E1212 15:21:10.604957 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:10 crc kubenswrapper[5099]: E1212 15:21:10.605273 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:10 crc kubenswrapper[5099]: E1212 15:21:10.607688 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.281632 5099 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.400560 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.605782 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.606407 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.606441 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:11 crc kubenswrapper[5099]: I1212 15:21:11.606454 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:11 crc kubenswrapper[5099]: E1212 15:21:11.606799 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:12 crc kubenswrapper[5099]: E1212 15:21:12.511018 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:21:12 crc kubenswrapper[5099]: I1212 15:21:12.608209 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:12 crc kubenswrapper[5099]: I1212 15:21:12.609012 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:12 crc kubenswrapper[5099]: I1212 15:21:12.609046 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:12 crc kubenswrapper[5099]: I1212 15:21:12.609057 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:12 crc kubenswrapper[5099]: E1212 15:21:12.609343 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.018182 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.018448 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.019463 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.019521 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.019535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:13 crc kubenswrapper[5099]: E1212 15:21:13.019953 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.835607 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 12 15:21:13 crc 
kubenswrapper[5099]: I1212 15:21:13.836631 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.837852 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.837916 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:13 crc kubenswrapper[5099]: I1212 15:21:13.837934 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:13 crc kubenswrapper[5099]: E1212 15:21:13.838587 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.282238 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.282339 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.307197 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.613377 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.614442 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.614510 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:14 crc kubenswrapper[5099]: I1212 15:21:14.614539 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:14 crc kubenswrapper[5099]: E1212 15:21:14.615015 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:17 crc kubenswrapper[5099]: I1212 15:21:17.348203 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 12 15:21:18 crc kubenswrapper[5099]: E1212 15:21:18.603576 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 12 15:21:18 crc kubenswrapper[5099]: E1212 15:21:18.605853 5099 certificate_manager.go:596] "Failed while 
requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 15:21:18 crc kubenswrapper[5099]: E1212 15:21:18.882261 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.448186 5099 trace.go:236] Trace[193822823]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:21:09.446) (total time: 10001ms): Dec 12 15:21:19 crc kubenswrapper[5099]: Trace[193822823]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:21:19.448) Dec 12 15:21:19 crc kubenswrapper[5099]: Trace[193822823]: [10.001882086s] [10.001882086s] END Dec 12 15:21:19 crc kubenswrapper[5099]: E1212 15:21:19.448243 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.776628 5099 trace.go:236] Trace[119262882]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:21:09.775) (total time: 10001ms): Dec 12 15:21:19 crc kubenswrapper[5099]: Trace[119262882]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:21:19.776) Dec 12 15:21:19 crc kubenswrapper[5099]: Trace[119262882]: [10.001060145s] [10.001060145s] END Dec 12 15:21:19 crc kubenswrapper[5099]: E1212 15:21:19.776729 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.951104 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.951271 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.963919 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:21:19 crc kubenswrapper[5099]: I1212 15:21:19.963986 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 15:21:22 crc kubenswrapper[5099]: E1212 15:21:22.511512 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:21:22 crc kubenswrapper[5099]: I1212 15:21:22.613539 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:21:22 crc kubenswrapper[5099]: I1212 15:21:22.613956 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:22 crc kubenswrapper[5099]: I1212 15:21:22.615467 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:22 crc kubenswrapper[5099]: I1212 15:21:22.615518 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:22 crc kubenswrapper[5099]: I1212 15:21:22.615543 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:22 crc kubenswrapper[5099]: E1212 15:21:22.616003 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.025988 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.026207 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.027043 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.027094 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.027113 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:23 crc kubenswrapper[5099]: E1212 15:21:23.027486 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.032368 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.642598 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.642704 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.643550 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 
15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.643620 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:23 crc kubenswrapper[5099]: I1212 15:21:23.643641 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:23 crc kubenswrapper[5099]: E1212 15:21:23.644406 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.283500 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.283625 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.334115 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.334503 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.335471 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.335538 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.335550 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.336113 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.348646 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.645614 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.646471 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.646512 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.646530 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.647175 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:24 crc kubenswrapper[5099]: 
I1212 15:21:24.954825 5099 trace.go:236] Trace[620534775]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:21:10.088) (total time: 14866ms): Dec 12 15:21:24 crc kubenswrapper[5099]: Trace[620534775]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14866ms (15:21:24.954) Dec 12 15:21:24 crc kubenswrapper[5099]: Trace[620534775]: [14.866410118s] [14.866410118s] END Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.954894 5099 trace.go:236] Trace[641603709]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:21:10.152) (total time: 14802ms): Dec 12 15:21:24 crc kubenswrapper[5099]: Trace[641603709]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14802ms (15:21:24.954) Dec 12 15:21:24 crc kubenswrapper[5099]: Trace[641603709]: [14.802361539s] [14.802361539s] END Dec 12 15:21:24 crc kubenswrapper[5099]: I1212 15:21:24.954961 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.954964 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.954897 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.955203 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f959178adf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.359161567 +0000 UTC m=+0.463070208,LastTimestamp:2025-12-12 15:21:02.359161567 +0000 UTC m=+0.463070208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.961601 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.966746 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.973576 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.980193 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f961fd9ca9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.508457129 +0000 UTC m=+0.612365770,LastTimestamp:2025-12-12 15:21:02.508457129 +0000 UTC m=+0.612365770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.988890 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.567240896 +0000 UTC m=+0.671149557,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:24 crc kubenswrapper[5099]: E1212 15:21:24.995808 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.567486 +0000 UTC m=+0.671394651,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.000493 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.567497981 +0000 UTC m=+0.671406632,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.007179 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.007204 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.570048943 +0000 UTC m=+0.673957584,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.018348 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.570073823 +0000 UTC m=+0.673982464,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.026426 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.570088434 +0000 UTC m=+0.673997075,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.032941 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.570506291 +0000 UTC m=+0.674414932,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.037483 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC 
m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.570545481 +0000 UTC m=+0.674454112,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.043515 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.570600472 +0000 UTC m=+0.674509113,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.049801 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.571889244 +0000 UTC m=+0.675797885,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.056088 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.571905544 +0000 UTC m=+0.675814185,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.065197 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.571916394 +0000 UTC m=+0.675825035,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069004 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34232->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069090 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34232->192.168.126.11:17697: read: connection reset by peer" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069480 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34246->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069512 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34246->192.168.126.11:17697: read: connection reset by peer" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069886 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.069935 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.069701 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.572016446 +0000 UTC m=+0.675925087,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.077503 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.572029046 +0000 UTC m=+0.675937677,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.085737 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.572037936 +0000 UTC m=+0.675946567,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.090701 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.573542741 +0000 UTC m=+0.677451372,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.095942 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC 
m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.573555031 +0000 UTC m=+0.677463662,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.100233 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c4b29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450699049 +0000 UTC m=+0.554607690,LastTimestamp:2025-12-12 15:21:02.573581692 +0000 UTC m=+0.677490333,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.105012 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8b5c0e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8b5c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450637838 +0000 UTC m=+0.554546479,LastTimestamp:2025-12-12 15:21:02.573742754 +0000 UTC m=+0.677651395,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.109333 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080f95e8c184d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080f95e8c184d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.450686029 +0000 UTC m=+0.554594670,LastTimestamp:2025-12-12 15:21:02.573795155 +0000 UTC m=+0.677703796,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.118216 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f97c6b0a9f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.951836319 +0000 UTC m=+1.055744960,LastTimestamp:2025-12-12 15:21:02.951836319 +0000 UTC m=+1.055744960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.122321 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f97c6cff05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.951964421 +0000 UTC m=+1.055873062,LastTimestamp:2025-12-12 15:21:02.951964421 +0000 UTC m=+1.055873062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.126854 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f97d1d8589 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.963533193 +0000 UTC m=+1.067441834,LastTimestamp:2025-12-12 15:21:02.963533193 +0000 UTC m=+1.067441834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.131926 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f97de49270 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.97657816 +0000 UTC m=+1.080486801,LastTimestamp:2025-12-12 15:21:02.97657816 +0000 UTC m=+1.080486801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.136817 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f97e66a3fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:02.985102332 +0000 UTC m=+1.089010973,LastTimestamp:2025-12-12 15:21:02.985102332 +0000 UTC m=+1.089010973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.142334 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9b689155d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.926883677 +0000 UTC m=+2.030792358,LastTimestamp:2025-12-12 15:21:03.926883677 +0000 UTC m=+2.030792358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.147292 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f9b69569b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.927691696 +0000 UTC m=+2.031600337,LastTimestamp:2025-12-12 15:21:03.927691696 +0000 UTC m=+2.031600337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.151773 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9b6960d53 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.927733587 +0000 UTC m=+2.031642228,LastTimestamp:2025-12-12 15:21:03.927733587 +0000 UTC m=+2.031642228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.156729 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f9b696d00f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.927783439 +0000 UTC m=+2.031692070,LastTimestamp:2025-12-12 15:21:03.927783439 +0000 UTC m=+2.031692070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.161531 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9b696df73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.927787379 +0000 UTC m=+2.031696060,LastTimestamp:2025-12-12 15:21:03.927787379 +0000 UTC m=+2.031696060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.165888 5099 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9b8b36a1d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.963212317 +0000 UTC m=+2.067120968,LastTimestamp:2025-12-12 15:21:03.963212317 +0000 UTC m=+2.067120968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.170896 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f9baddfe1b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.999557147 +0000 UTC m=+2.103465788,LastTimestamp:2025-12-12 15:21:03.999557147 +0000 UTC m=+2.103465788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.175992 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f9bae38b96 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:03.999921046 +0000 UTC m=+2.103829687,LastTimestamp:2025-12-12 15:21:03.999921046 +0000 UTC m=+2.103829687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.180318 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9bae684d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.000115921 +0000 UTC m=+2.104024572,LastTimestamp:2025-12-12 15:21:04.000115921 +0000 UTC m=+2.104024572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.184747 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9baec8628 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.00050948 +0000 UTC m=+2.104418121,LastTimestamp:2025-12-12 15:21:04.00050948 +0000 UTC m=+2.104418121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.189887 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9baf499b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.001038773 +0000 UTC m=+2.104947424,LastTimestamp:2025-12-12 15:21:04.001038773 +0000 UTC m=+2.104947424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.196747 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9d7d505f0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.485508592 +0000 UTC m=+2.589417233,LastTimestamp:2025-12-12 15:21:04.485508592 +0000 UTC m=+2.589417233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.201907 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9d7fef686 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.488257158 +0000 UTC m=+2.592165819,LastTimestamp:2025-12-12 15:21:04.488257158 +0000 UTC m=+2.592165819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.206209 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f9d813a86c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.48961342 +0000 UTC m=+2.593522061,LastTimestamp:2025-12-12 15:21:04.48961342 +0000 UTC m=+2.593522061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.210276 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f9d83892ec openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.492032748 +0000 UTC m=+2.595941429,LastTimestamp:2025-12-12 15:21:04.492032748 +0000 UTC m=+2.595941429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.216121 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9db4bbe0f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.543620623 +0000 UTC m=+2.647529264,LastTimestamp:2025-12-12 15:21:04.543620623 +0000 UTC m=+2.647529264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.221045 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9ea5adf61 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.796270433 +0000 UTC m=+2.900179074,LastTimestamp:2025-12-12 15:21:04.796270433 +0000 UTC m=+2.900179074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.225794 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080f9ea6ddb2e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:04.797514542 +0000 UTC m=+2.901423203,LastTimestamp:2025-12-12 15:21:04.797514542 +0000 UTC m=+2.901423203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.352526 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9f920d991 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.044126097 +0000 UTC m=+3.148034738,LastTimestamp:2025-12-12 15:21:05.044126097 +0000 UTC m=+3.148034738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.364360 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.431667 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.431733 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.431745 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.431772 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.434348 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f9f927f664 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.044592228 +0000 UTC m=+3.148500869,LastTimestamp:2025-12-12 15:21:05.044592228 +0000 UTC m=+3.148500869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.434569 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.437364 5099 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9f92cc1f6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.044906486 +0000 UTC m=+3.148815127,LastTimestamp:2025-12-12 15:21:05.044906486 +0000 UTC m=+3.148815127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.438019 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.440805 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f9f9388ac0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.045678784 +0000 UTC m=+3.149587425,LastTimestamp:2025-12-12 15:21:05.045678784 +0000 UTC m=+3.149587425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.443793 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9f9dff259 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.056649817 +0000 UTC m=+3.160558458,LastTimestamp:2025-12-12 15:21:05.056649817 +0000 UTC m=+3.160558458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.446336 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080f9f9fca886 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.058531462 +0000 UTC m=+3.162440093,LastTimestamp:2025-12-12 15:21:05.058531462 +0000 UTC m=+3.162440093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.450655 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9fa9dd407 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.069093895 +0000 UTC m=+3.173002536,LastTimestamp:2025-12-12 15:21:05.069093895 +0000 UTC m=+3.173002536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.457584 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080f9fad56f90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.072738192 +0000 UTC m=+3.176646833,LastTimestamp:2025-12-12 15:21:05.072738192 +0000 UTC m=+3.176646833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.461977 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080f9fbe14027 openshift-machine-config-operator 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.090289703 +0000 UTC m=+3.194198354,LastTimestamp:2025-12-12 15:21:05.090289703 +0000 UTC m=+3.194198354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.465466 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080f9fc62701d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.098756125 +0000 UTC m=+3.202664766,LastTimestamp:2025-12-12 15:21:05.098756125 +0000 UTC m=+3.202664766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.470084 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080fa0e78e96d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.402218861 +0000 UTC m=+3.506127502,LastTimestamp:2025-12-12 15:21:05.402218861 +0000 UTC m=+3.506127502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.474787 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080fa139525ae openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.487955374 +0000 UTC 
m=+3.591864015,LastTimestamp:2025-12-12 15:21:05.487955374 +0000 UTC m=+3.591864015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.479279 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080fa13a4fe25 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.488993829 +0000 UTC m=+3.592902470,LastTimestamp:2025-12-12 15:21:05.488993829 +0000 UTC m=+3.592902470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.486161 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa15221b4b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.513970507 +0000 UTC m=+3.617879158,LastTimestamp:2025-12-12 15:21:05.513970507 +0000 UTC m=+3.617879158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.494058 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080fa306a77b5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.971697589 +0000 UTC m=+4.075606230,LastTimestamp:2025-12-12 15:21:05.971697589 +0000 UTC m=+4.075606230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.498397 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa30d3917a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.978585466 +0000 UTC m=+4.082494107,LastTimestamp:2025-12-12 15:21:05.978585466 +0000 UTC m=+4.082494107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.505584 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080fa316001d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:05.987789268 +0000 UTC m=+4.091697909,LastTimestamp:2025-12-12 15:21:05.987789268 +0000 UTC m=+4.091697909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.518924 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa32557880 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.003875968 +0000 UTC m=+4.107784619,LastTimestamp:2025-12-12 15:21:06.003875968 +0000 UTC m=+4.107784619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.525795 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa4cb14146 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.446098758 +0000 UTC m=+4.550007409,LastTimestamp:2025-12-12 15:21:06.446098758 +0000 UTC m=+4.550007409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.530023 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fa4cb79c8a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.446515338 +0000 UTC m=+4.550423989,LastTimestamp:2025-12-12 15:21:06.446515338 +0000 UTC m=+4.550423989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.536386 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa4d9d6ade openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.461575902 +0000 UTC m=+4.565484543,LastTimestamp:2025-12-12 15:21:06.461575902 +0000 UTC m=+4.565484543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.541177 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fa4da589a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.462108065 +0000 UTC m=+4.566016706,LastTimestamp:2025-12-12 15:21:06.462108065 +0000 UTC m=+4.566016706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.546698 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa4db28df0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.462961136 +0000 UTC m=+4.566869777,LastTimestamp:2025-12-12 15:21:06.462961136 +0000 UTC m=+4.566869777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.551954 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fa4dd147f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.464974835 +0000 UTC m=+4.568883476,LastTimestamp:2025-12-12 15:21:06.464974835 +0000 UTC m=+4.568883476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.559184 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa51895538 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:06.527368504 +0000 UTC m=+4.631277145,LastTimestamp:2025-12-12 15:21:06.527368504 +0000 UTC m=+4.631277145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.565257 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fa703f4971 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.042609521 +0000 UTC m=+5.146518162,LastTimestamp:2025-12-12 15:21:07.042609521 +0000 UTC m=+5.146518162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.570032 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa7042abcb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.042831307 +0000 UTC m=+5.146739948,LastTimestamp:2025-12-12 15:21:07.042831307 +0000 UTC m=+5.146739948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.574556 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa70465258 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.043070552 +0000 UTC m=+5.146979203,LastTimestamp:2025-12-12 15:21:07.043070552 +0000 UTC m=+5.146979203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.578959 5099 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa70faee1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.054906909 +0000 UTC m=+5.158815550,LastTimestamp:2025-12-12 15:21:07.054906909 +0000 UTC m=+5.158815550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.583137 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa710c76b3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.056055987 +0000 UTC m=+5.159964628,LastTimestamp:2025-12-12 15:21:07.056055987 +0000 UTC m=+5.159964628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.587178 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fa762c8791 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.142043537 +0000 UTC m=+5.245952178,LastTimestamp:2025-12-12 15:21:07.142043537 +0000 UTC m=+5.245952178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.592970 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa774524bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.160433852 +0000 UTC m=+5.264342493,LastTimestamp:2025-12-12 15:21:07.160433852 +0000 UTC m=+5.264342493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.599128 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa77615be1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.162282977 +0000 UTC m=+5.266191618,LastTimestamp:2025-12-12 15:21:07.162282977 +0000 UTC m=+5.266191618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.604101 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa83460d15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.361819925 +0000 UTC m=+5.465728566,LastTimestamp:2025-12-12 15:21:07.361819925 +0000 UTC m=+5.465728566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.608771 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa855df67b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.396941435 +0000 UTC m=+5.500850076,LastTimestamp:2025-12-12 15:21:07.396941435 +0000 UTC m=+5.500850076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.614239 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa856ef5a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.398055332 +0000 UTC m=+5.501963983,LastTimestamp:2025-12-12 15:21:07.398055332 +0000 UTC m=+5.501963983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.619877 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa85a4d8bd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.401586877 +0000 UTC m=+5.505495508,LastTimestamp:2025-12-12 15:21:07.401586877 +0000 UTC m=+5.505495508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.626118 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa89a122e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.468452585 +0000 UTC m=+5.572361226,LastTimestamp:2025-12-12 15:21:07.468452585 +0000 UTC m=+5.572361226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.630998 5099 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fa89b5ecf4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.469815028 +0000 UTC m=+5.573723669,LastTimestamp:2025-12-12 15:21:07.469815028 +0000 UTC m=+5.573723669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.636489 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab49ae55c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.1894639 +0000 UTC m=+6.293372541,LastTimestamp:2025-12-12 15:21:08.1894639 +0000 UTC m=+6.293372541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.640513 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab5b275ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.207785453 +0000 UTC m=+6.311694094,LastTimestamp:2025-12-12 15:21:08.207785453 +0000 UTC m=+6.311694094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.644280 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fab65249a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.218259877 +0000 UTC m=+6.322168518,LastTimestamp:2025-12-12 15:21:08.218259877 +0000 UTC m=+6.322168518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.645421 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fab76bf70b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.236719883 +0000 UTC m=+6.340628544,LastTimestamp:2025-12-12 15:21:08.236719883 +0000 UTC m=+6.340628544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.650875 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.651155 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fab780a0a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.238074016 +0000 UTC m=+6.341982647,LastTimestamp:2025-12-12 15:21:08.238074016 +0000 UTC m=+6.341982647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.655916 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7149b84d00d7f7a5796fdfe1d71f1e9c215c01a4e9b24391a95d27227d08525" exitCode=255 Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.655973 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b7149b84d00d7f7a5796fdfe1d71f1e9c215c01a4e9b24391a95d27227d08525"} Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.656192 5099 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.656702 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.656755 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.656768 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.657180 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:25 crc kubenswrapper[5099]: I1212 15:21:25.657471 5099 scope.go:117] "RemoveContainer" containerID="b7149b84d00d7f7a5796fdfe1d71f1e9c215c01a4e9b24391a95d27227d08525" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.658569 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fadc1804ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.851975341 +0000 UTC m=+6.955883982,LastTimestamp:2025-12-12 15:21:08.851975341 +0000 UTC m=+6.955883982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.678814 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fadd0b6e55 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.867927637 +0000 UTC m=+6.971836278,LastTimestamp:2025-12-12 15:21:08.867927637 +0000 UTC m=+6.971836278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.687217 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080fadd2183a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.869374882 +0000 UTC m=+6.973283523,LastTimestamp:2025-12-12 15:21:08.869374882 +0000 UTC m=+6.973283523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.700956 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080faf44d0a2b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:09.258103339 +0000 UTC m=+7.362011980,LastTimestamp:2025-12-12 15:21:09.258103339 +0000 UTC m=+7.362011980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.706401 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080faf520092f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:09.271931183 +0000 UTC m=+7.375839824,LastTimestamp:2025-12-12 15:21:09.271931183 +0000 UTC m=+7.375839824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.731389 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188080fc1fc44640 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 12 15:21:25 crc kubenswrapper[5099]: body: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:14.282305088 +0000 UTC m=+12.386213749,LastTimestamp:2025-12-12 15:21:14.282305088 
+0000 UTC m=+12.386213749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.741311 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fc1fc63a2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:14.282433071 +0000 UTC m=+12.386341722,LastTimestamp:2025-12-12 15:21:14.282433071 +0000 UTC m=+12.386341722,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.747141 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188080fd71a92ae5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 15:21:25 crc kubenswrapper[5099]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:21:25 crc kubenswrapper[5099]: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:19.951227621 +0000 UTC m=+18.055136272,LastTimestamp:2025-12-12 15:21:19.951227621 +0000 UTC m=+18.055136272,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.758563 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fd71aac7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP 
probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:19.951333283 +0000 UTC m=+18.055241924,LastTimestamp:2025-12-12 15:21:19.951333283 +0000 UTC m=+18.055241924,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.768039 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fd71a92ae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188080fd71a92ae5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 15:21:25 crc kubenswrapper[5099]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:21:25 crc kubenswrapper[5099]: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:19.951227621 +0000 UTC m=+18.055136272,LastTimestamp:2025-12-12 15:21:19.963962109 +0000 UTC m=+18.067870750,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.772238 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fd71aac7a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fd71aac7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:19.951333283 +0000 UTC m=+18.055241924,LastTimestamp:2025-12-12 15:21:19.96401714 +0000 UTC m=+18.067925801,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.778867 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188080fc1fc44640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188080fc1fc44640 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 12 15:21:25 crc kubenswrapper[5099]: body: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:14.282305088 +0000 UTC m=+12.386213749,LastTimestamp:2025-12-12 15:21:24.283591078 +0000 UTC m=+22.387499739,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.783563 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188080fc1fc63a2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080fc1fc63a2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:14.282433071 +0000 UTC m=+12.386341722,LastTimestamp:2025-12-12 15:21:24.28365749 +0000 UTC m=+22.387566131,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.790189 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188080fea2b51f60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:34232->192.168.126.11:17697: read: connection reset by peer Dec 12 15:21:25 crc kubenswrapper[5099]: body: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069061984 +0000 UTC m=+23.172970625,LastTimestamp:2025-12-12 15:21:25.069061984 +0000 UTC m=+23.172970625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.797871 5099 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fea2b5ee9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34232->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069115035 +0000 UTC m=+23.173023676,LastTimestamp:2025-12-12 15:21:25.069115035 +0000 UTC m=+23.173023676,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.803451 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188080fea2bbdaee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:34246->192.168.126.11:17697: read: connection reset by peer Dec 12 15:21:25 crc kubenswrapper[5099]: body: Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069503214 +0000 UTC m=+23.173411855,LastTimestamp:2025-12-12 15:21:25.069503214 +0000 UTC m=+23.173411855,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:21:25 crc kubenswrapper[5099]: > Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.808995 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fea2bc2e9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34246->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069524635 +0000 UTC m=+23.173433276,LastTimestamp:2025-12-12 15:21:25.069524635 +0000 UTC m=+23.173433276,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.813655 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:21:25 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188080fea2c22584 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 12 15:21:25 crc kubenswrapper[5099]: body:
Dec 12 15:21:25 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069915524 +0000 UTC m=+23.173824165,LastTimestamp:2025-12-12 15:21:25.069915524 +0000 UTC m=+23.173824165,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:21:25 crc kubenswrapper[5099]: >
Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.826149 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fea2c2bbb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:25.069953975 +0000 UTC m=+23.173862616,LastTimestamp:2025-12-12 15:21:25.069953975 +0000 UTC m=+23.173862616,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.831648 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fa856ef5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa856ef5a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.398055332 +0000 UTC m=+5.501963983,LastTimestamp:2025-12-12 15:21:25.659007677 +0000 UTC m=+23.762916318,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.970736 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fab49ae55c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab49ae55c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.1894639 +0000 UTC m=+6.293372541,LastTimestamp:2025-12-12 15:21:25.969355417 +0000 UTC m=+24.073264058,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:25 crc kubenswrapper[5099]: E1212 15:21:25.997619 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fab5b275ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab5b275ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.207785453 +0000 UTC m=+6.311694094,LastTimestamp:2025-12-12 15:21:25.988450409 +0000 UTC m=+24.092359050,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.352699 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.661685 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.663786 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"}
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.664099 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.664818 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.664859 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.664875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:26 crc kubenswrapper[5099]: E1212 15:21:26.665266 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.921447 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 15:21:26 crc kubenswrapper[5099]: I1212 15:21:26.945545 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 15:21:27 crc kubenswrapper[5099]: E1212 15:21:27.069740 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.350441 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.668297 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.668901 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.670826 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679" exitCode=255
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.670902 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"}
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.670977 5099 scope.go:117] "RemoveContainer" containerID="b7149b84d00d7f7a5796fdfe1d71f1e9c215c01a4e9b24391a95d27227d08525"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.671144 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.671793 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.671883 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.671897 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:27 crc kubenswrapper[5099]: E1212 15:21:27.674309 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:27 crc kubenswrapper[5099]: I1212 15:21:27.674557 5099 scope.go:117] "RemoveContainer" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"
Dec 12 15:21:27 crc kubenswrapper[5099]: E1212 15:21:27.674799 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:27 crc kubenswrapper[5099]: E1212 15:21:27.679501 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:28 crc kubenswrapper[5099]: I1212 15:21:28.352175 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:28 crc kubenswrapper[5099]: I1212 15:21:28.675372 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:21:29 crc kubenswrapper[5099]: I1212 15:21:29.351593 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:30 crc kubenswrapper[5099]: I1212 15:21:30.357648 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.288648 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.288966 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
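
The "back-off 10s" in the pod_workers entries above is the kubelet's per-container restart back-off; later entries in this log show it doubling to "back-off 20s" and then "back-off 40s" as kube-apiserver-check-endpoints keeps exiting with code 255. A minimal sketch of that doubling schedule in Go, assuming the commonly documented defaults (10s initial delay, doubled on each failed restart, capped at 5 minutes; only the first three steps are actually visible in this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling restart back-off as seen in this log: 10s, 20s, 40s, ...
        // The 5-minute cap is an assumption; the log only shows three steps.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
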
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.290623 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.290731 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.290757 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:31 crc kubenswrapper[5099]: E1212 15:21:31.291453 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.295059 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.353258 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.389818 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.390271 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.391695 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.391766 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.391787 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:31 crc kubenswrapper[5099]: E1212 15:21:31.392472 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.392870 5099 scope.go:117] "RemoveContainer" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"
Dec 12 15:21:31 crc kubenswrapper[5099]: E1212 15:21:31.393190 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:31 crc kubenswrapper[5099]: E1212 15:21:31.413746 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ff3e04bfbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:31.393145633 +0000 UTC m=+29.497054284,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.684262 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.685007 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.685086 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:31 crc kubenswrapper[5099]: I1212 15:21:31.685107 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:31 crc kubenswrapper[5099]: E1212 15:21:31.685736 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:32 crc kubenswrapper[5099]: E1212 15:21:32.012763 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.353501 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:32 crc kubenswrapper[5099]: E1212 15:21:32.373193 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.438732 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.439931 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.439973 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.439989 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:32 crc kubenswrapper[5099]: I1212 15:21:32.440014 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:21:32 crc kubenswrapper[5099]: E1212 15:21:32.600937 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:21:32 crc kubenswrapper[5099]: E1212 15:21:32.604625 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 15:21:33 crc kubenswrapper[5099]: E1212 15:21:33.149385 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 15:21:33 crc kubenswrapper[5099]: I1212 15:21:33.350285 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:34 crc kubenswrapper[5099]: I1212 15:21:34.355485 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:35 crc kubenswrapper[5099]: I1212 15:21:35.350441 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.405096 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.665162 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.665628 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.667166 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.667577 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.667758 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:36 crc kubenswrapper[5099]: E1212 15:21:36.668878 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:36 crc kubenswrapper[5099]: I1212 15:21:36.669418 5099 scope.go:117] "RemoveContainer" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"
Dec 12 15:21:36 crc kubenswrapper[5099]: E1212 15:21:36.669908 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:36 crc kubenswrapper[5099]: E1212 15:21:36.678960 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ff3e04bfbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:36.669846688 +0000 UTC m=+34.773755359,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:37 crc kubenswrapper[5099]: E1212 15:21:37.060511 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:21:37 crc kubenswrapper[5099]: I1212 15:21:37.356053 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:38 crc kubenswrapper[5099]: I1212 15:21:38.352776 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:39 crc kubenswrapper[5099]: E1212 15:21:39.020027 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.353047 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.604838 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.605952 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.606036 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
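
Every denial in this stretch has the same shape: until its client certificate is issued, the kubelet authenticates as system:anonymous, which may not create or patch events, get csinodes or nodes, or read leases. The check the API server is applying can be reproduced with a SelfSubjectAccessReview (the same question kubectl auth can-i asks); a minimal client-go sketch, with a hypothetical kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; use credentials for the identity
        // under test (an unauthenticated config reproduces the
        // system:anonymous denials recorded in this log).
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Ask the API server: may I create events in openshift-kube-apiserver?
        review := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Namespace: "openshift-kube-apiserver",
                    Verb:      "create",
                    Resource:  "events",
                },
            },
        }
        resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(
            context.TODO(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }
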
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.606054 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:39 crc kubenswrapper[5099]: I1212 15:21:39.606087 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:21:39 crc kubenswrapper[5099]: E1212 15:21:39.620813 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 15:21:40 crc kubenswrapper[5099]: I1212 15:21:40.353545 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:41 crc kubenswrapper[5099]: I1212 15:21:41.355051 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:41 crc kubenswrapper[5099]: E1212 15:21:41.592230 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:21:42 crc kubenswrapper[5099]: I1212 15:21:42.354776 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:42 crc kubenswrapper[5099]: E1212 15:21:42.754457 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:21:43 crc kubenswrapper[5099]: I1212 15:21:43.352082 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:44 crc kubenswrapper[5099]: I1212 15:21:44.350999 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:45 crc kubenswrapper[5099]: I1212 15:21:45.351753 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:46 crc kubenswrapper[5099]: E1212 15:21:46.030489 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.354189 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.622051 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.623886 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.623966 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.623981 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:46 crc kubenswrapper[5099]: I1212 15:21:46.624041 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:21:46 crc kubenswrapper[5099]: E1212 15:21:46.634757 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.352935 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.466408 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.467706 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.467784 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.467861 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.477733 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.478377 5099 scope.go:117] "RemoveContainer" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.487340 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fa856ef5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fa856ef5a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:07.398055332 +0000 UTC m=+5.501963983,LastTimestamp:2025-12-12 15:21:47.481477968 +0000 UTC m=+45.585386609,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.715470 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fab49ae55c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab49ae55c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.1894639 +0000 UTC m=+6.293372541,LastTimestamp:2025-12-12 15:21:47.709518416 +0000 UTC m=+45.813427067,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.728350 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080fab5b275ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080fab5b275ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:08.207785453 +0000 UTC m=+6.311694094,LastTimestamp:2025-12-12 15:21:47.726682831 +0000 UTC m=+45.830591482,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.746063 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.777499 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.780741 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62"}
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.780975 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
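
The recurring "Failed to ensure lease exists, will retry ... interval="7s"" entries come from the kubelet's node-lease heartbeat: it tries to get (and then create or renew) a coordination.k8s.io Lease named after the node in the kube-node-lease namespace, and the anonymous GET is forbidden. A sketch of reading that Lease with client-go (client construction as in the sketch above; the kubeconfig path is again hypothetical):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The object the lease controller is trying to ensure: Lease "crc"
        // in kube-node-lease. Anonymous callers get the "forbidden" error
        // recorded in this log.
        lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
            context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("holder:", *lease.Spec.HolderIdentity)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Println("last renew:", lease.Spec.RenewTime.Time)
        }
    }
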
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.782852 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.782935 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:47 crc kubenswrapper[5099]: I1212 15:21:47.782953 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:47 crc kubenswrapper[5099]: E1212 15:21:47.783652 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.352584 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.786051 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.786577 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.789184 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62"}
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.789206 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62" exitCode=255
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.789260 5099 scope.go:117] "RemoveContainer" containerID="25521c5b84a9d82a9d3ffdb5025ff7326ab50a48d23e35af1b49e90af2f66679"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.789768 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.794862 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.794912 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.794926 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:48 crc kubenswrapper[5099]: E1212 15:21:48.795341 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:48 crc kubenswrapper[5099]: I1212 15:21:48.795623 5099 scope.go:117] "RemoveContainer" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62"
Dec 12 15:21:48 crc kubenswrapper[5099]: E1212 15:21:48.796003 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:48 crc kubenswrapper[5099]: E1212 15:21:48.801413 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ff3e04bfbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:48.795955474 +0000 UTC m=+46.899864115,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:49 crc kubenswrapper[5099]: I1212 15:21:49.352197 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:49 crc kubenswrapper[5099]: I1212 15:21:49.794934 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:21:50 crc kubenswrapper[5099]: I1212 15:21:50.350857 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:51 crc kubenswrapper[5099]: E1212 15:21:51.170406 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.351833 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.390073 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.390336 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.391303 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.391379 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.391390 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:21:51 crc kubenswrapper[5099]: E1212 15:21:51.391789 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:21:51 crc kubenswrapper[5099]: I1212 15:21:51.392045 5099 scope.go:117] "RemoveContainer" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62"
Dec 12 15:21:51 crc kubenswrapper[5099]: E1212 15:21:51.392240 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:51 crc kubenswrapper[5099]: E1212 15:21:51.397603 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ff3e04bfbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:51.392208993 +0000 UTC m=+49.496117634,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:21:52 crc kubenswrapper[5099]: I1212 15:21:52.351558 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:21:52 crc kubenswrapper[5099]: E1212 15:21:52.755290 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:21:53 crc kubenswrapper[5099]: E1212 15:21:53.037350 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:21:53 crc kubenswrapper[5099]: I1212 15:21:53.351407 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
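
The liveness and readiness failures for kube-apiserver-crc trace back to the probe error recorded earlier: the check-endpoints container's endpoint at https://192.168.126.11:17697/healthz refusing TCP connections. The kubelet's HTTPS probe amounts to a GET like the following sketch (the URL is from the log; skipping certificate verification is an assumption, since probe endpoints typically serve self-signed certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // Endpoint from the ProbeError message in this log. A closed port
        // fails with "connect: connection refused", which the kubelet
        // reports as an unhealthy probe.
        resp, err := client.Get("https://192.168.126.11:17697/healthz")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }
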
"Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:53 crc kubenswrapper[5099]: I1212 15:21:53.637283 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:53 crc kubenswrapper[5099]: I1212 15:21:53.637350 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:53 crc kubenswrapper[5099]: I1212 15:21:53.637365 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:53 crc kubenswrapper[5099]: I1212 15:21:53.637400 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:21:53 crc kubenswrapper[5099]: E1212 15:21:53.647978 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:21:54 crc kubenswrapper[5099]: I1212 15:21:54.351791 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:55 crc kubenswrapper[5099]: I1212 15:21:55.352094 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:56 crc kubenswrapper[5099]: I1212 15:21:56.351212 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.352372 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.781540 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.781880 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.782884 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.783028 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.783049 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:57 crc kubenswrapper[5099]: E1212 15:21:57.783458 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:57 crc kubenswrapper[5099]: I1212 15:21:57.783757 5099 scope.go:117] "RemoveContainer" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62" Dec 12 15:21:57 crc kubenswrapper[5099]: E1212 
15:21:57.784013 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:21:57 crc kubenswrapper[5099]: E1212 15:21:57.793847 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ff3e04bfbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ff3e04bfbd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:21:27.674748861 +0000 UTC m=+25.778657492,LastTimestamp:2025-12-12 15:21:57.78397035 +0000 UTC m=+55.887879001,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.355101 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.598164 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.598379 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.599283 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.599315 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:58 crc kubenswrapper[5099]: I1212 15:21:58.599325 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:58 crc kubenswrapper[5099]: E1212 15:21:58.599842 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:21:59 crc kubenswrapper[5099]: I1212 15:21:59.351956 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:00 crc kubenswrapper[5099]: E1212 15:22:00.045998 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.351276 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.648854 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.649858 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.649896 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.649906 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:00 crc kubenswrapper[5099]: I1212 15:22:00.649927 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:22:00 crc kubenswrapper[5099]: E1212 15:22:00.658042 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:22:01 crc kubenswrapper[5099]: E1212 15:22:01.197182 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:22:01 crc kubenswrapper[5099]: I1212 15:22:01.352268 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:02 crc kubenswrapper[5099]: I1212 15:22:02.354241 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:02 crc kubenswrapper[5099]: E1212 15:22:02.756374 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:22:03 crc kubenswrapper[5099]: I1212 15:22:03.351581 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:04 crc kubenswrapper[5099]: I1212 15:22:04.353711 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:22:05 crc kubenswrapper[5099]: I1212 15:22:05.351608 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
Dec 12 15:22:06 crc kubenswrapper[5099]: I1212 15:22:06.352099 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:22:07 crc kubenswrapper[5099]: E1212 15:22:07.051385 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.429966 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.659357 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.661043 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.661097 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.661113 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:07 crc kubenswrapper[5099]: I1212 15:22:07.661146 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:22:07 crc kubenswrapper[5099]: E1212 15:22:07.671182 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 15:22:08 crc kubenswrapper[5099]: I1212 15:22:08.351603 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:22:08 crc kubenswrapper[5099]: I1212 15:22:08.555097 5099 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-22x2j"
Dec 12 15:22:08 crc kubenswrapper[5099]: I1212 15:22:08.562148 5099 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-22x2j"
Dec 12 15:22:08 crc kubenswrapper[5099]: I1212 15:22:08.664024 5099 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 12 15:22:09 crc kubenswrapper[5099]: I1212 15:22:09.168442 5099 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 12 15:22:09 crc kubenswrapper[5099]: I1212 15:22:09.659160 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 15:17:08 +0000 UTC" deadline="2026-01-05 13:42:09.745603461 +0000 UTC"
Dec 12 15:22:09 crc kubenswrapper[5099]: I1212 15:22:09.659268 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="574h20m0.086347596s"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.467078 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.468596 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.468700 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.468730 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:11 crc kubenswrapper[5099]: E1212 15:22:11.469515 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.469941 5099 scope.go:117] "RemoveContainer" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.874633 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.877774 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57"}
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.878343 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.879714 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.879774 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:11 crc kubenswrapper[5099]: I1212 15:22:11.879789 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:11 crc kubenswrapper[5099]: E1212 15:22:11.880723 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:12 crc kubenswrapper[5099]: E1212 15:22:12.757862 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.881422 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.882205 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.883960 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57" exitCode=255
kubenswrapper[5099]: I1212 15:22:12.884012 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57"} Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.884062 5099 scope.go:117] "RemoveContainer" containerID="d4d388861a573611518ba2d63cb05e9927bab8b17fffa8035029d80912f52d62" Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.884331 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.884986 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.885019 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.885049 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:12 crc kubenswrapper[5099]: E1212 15:22:12.885579 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:22:12 crc kubenswrapper[5099]: I1212 15:22:12.885879 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57" Dec 12 15:22:12 crc kubenswrapper[5099]: E1212 15:22:12.886143 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:22:13 crc kubenswrapper[5099]: I1212 15:22:13.888254 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.671969 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.673092 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.673164 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.673188 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.673387 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.682308 5099 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.682612 5099 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.682637 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 15:22:14 crc 
kubenswrapper[5099]: I1212 15:22:14.685987 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.686021 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.686030 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.686044 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.686074 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:14Z","lastTransitionTime":"2025-12-12T15:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.754244 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.762026 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.762068 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.762079 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.762093 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.762103 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:14Z","lastTransitionTime":"2025-12-12T15:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.775230 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.782238 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.782277 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.782287 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.782301 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.782310 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:14Z","lastTransitionTime":"2025-12-12T15:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.794084 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.801308 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.801350 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.801392 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.801439 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:14 crc kubenswrapper[5099]: I1212 15:22:14.801453 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:14Z","lastTransitionTime":"2025-12-12T15:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.814160 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.814332 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.814365 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:14 crc kubenswrapper[5099]: E1212 15:22:14.914809 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.016323 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.117075 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.218368 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.319295 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.420640 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.521060 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.622189 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.723174 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.823319 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:15 crc kubenswrapper[5099]: E1212 15:22:15.924064 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.025215 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.126627 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.227781 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.328632 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.438610 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.539687 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.640508 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.741414 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.842213 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:16 crc kubenswrapper[5099]: I1212 15:22:16.921552 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 15:22:16 crc kubenswrapper[5099]: E1212 15:22:16.943425 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.044440 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.144842 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.245592 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.346199 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.447083 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.547443 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.648523 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.749541 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.850459 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:17 crc kubenswrapper[5099]: E1212 15:22:17.950882 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.051920 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.153055 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.254103 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.354747 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.455431 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.555855 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.656508 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.756943 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.858242 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:18 crc kubenswrapper[5099]: E1212 15:22:18.959497 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
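The loop above repeats because the Node object "crc" never reaches the kubelet's informer cache: the status patch at 15:22:14 above was rejected while the node.network-node-identity.openshift.io webhook backend at 127.0.0.1:9743 refused connections, and the patch payloads further down record NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration. A minimal triage sketch for those two conditions, assuming it runs on the crc node itself; the address, port, and directory are copied verbatim from the log, everything else is illustrative:

```python
# Triage sketch for the failure loop above. Assumptions: run on the "crc"
# node; 127.0.0.1:9743 and /etc/kubernetes/cni/net.d/ are taken from the log.
import os
import socket

WEBHOOK = ("127.0.0.1", 9743)           # node.network-node-identity backend
CNI_DIR = "/etc/kubernetes/cni/net.d/"  # reported empty by the kubelet

def webhook_reachable(addr=WEBHOOK, timeout=2.0):
    """True once something listens on the webhook port; the log shows 'connection refused'."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError as exc:
        print(f"webhook {addr[0]}:{addr[1]} unreachable: {exc}")
        return False

def cni_config_present(path=CNI_DIR):
    """True once a CNI config file exists; until then the Ready condition stays False."""
    try:
        files = [f for f in os.listdir(path) if f.endswith((".conf", ".conflist", ".json"))]
    except FileNotFoundError:
        files = []
    print(f"{path}: {files or 'no CNI configuration files'}")
    return bool(files)

if __name__ == "__main__":
    webhook_reachable()
    cni_config_present()
```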
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.060595 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.161530 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.262091 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.362957 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.463995 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.565322 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.666330 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.767408 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.867815 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:19 crc kubenswrapper[5099]: E1212 15:22:19.968826 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.069776 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.170964 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.271735 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.372426 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.473003 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.573891 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.675026 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.775486 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.876448 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:20 crc kubenswrapper[5099]: E1212 15:22:20.976637 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.077838 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
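Just below, the kube-apiserver liveness probe goes unhealthy and the kubelet declines to restart the kube-apiserver-check-endpoints container immediately ("back-off 40s restarting failed container..."). That 40s is consistent with the kubelet's crash-loop restart schedule, commonly documented upstream as starting at 10s and doubling per restart up to a five-minute cap; a sketch of that schedule, where the constants are assumptions from upstream documentation rather than values read from this cluster:

```python
# Sketch of the commonly documented CrashLoopBackOff schedule (10s initial
# delay, doubling per restart, capped at 5 minutes). Constants are assumed
# defaults, not read from this cluster's configuration.
BASE_SECONDS = 10
CAP_SECONDS = 300

def crashloop_delay(restart_count: int) -> int:
    """Delay before restart number `restart_count` (1-based)."""
    return min(BASE_SECONDS * 2 ** (restart_count - 1), CAP_SECONDS)

for n in range(1, 7):
    print(n, crashloop_delay(n))  # 10, 20, 40, 80, 160, 300; 40s = third restart
```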
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.178578 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.279561 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.380197 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.389819 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.390998 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.392947 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.393058 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.393079 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.394264 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.394986 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.395636 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.480354 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.580559 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.681582 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.782040 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.878646 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.883042 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.921446 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.922820 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.922909 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.922960 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.924260 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:21 crc kubenswrapper[5099]: I1212 15:22:21.924885 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.925383 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:22:21 crc kubenswrapper[5099]: E1212 15:22:21.984146 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.084704 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.185067 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.285226 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.386486 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.487628 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.588634 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.688978 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.758314 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.789590 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.890744 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:22 crc kubenswrapper[5099]: E1212 15:22:22.991753 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.092545 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.193507 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.294199 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.395097 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.495479 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.596370 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.696841 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.797735 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.897935 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:23 crc kubenswrapper[5099]: E1212 15:22:23.999434 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.100939 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.201504 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.301732 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.402477 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.466293 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.467343 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.467410 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.467420 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.467880 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.504017 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.604931 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.706117 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:22:24
crc kubenswrapper[5099]: E1212 15:22:24.806831 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.907410 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.945251 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.950010 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.950045 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.950054 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.950068 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.950077 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:24Z","lastTransitionTime":"2025-12-12T15:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.962078 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.967483 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.967574 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.967588 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.967604 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.967617 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:24Z","lastTransitionTime":"2025-12-12T15:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.980091 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.983709 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.983764 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.983791 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.983812 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:24 crc kubenswrapper[5099]: I1212 15:22:24.983824 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:24Z","lastTransitionTime":"2025-12-12T15:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:24 crc kubenswrapper[5099]: E1212 15:22:24.996194 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:25 crc kubenswrapper[5099]: I1212 15:22:25.000673 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:25 crc kubenswrapper[5099]: I1212 15:22:25.000723 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:25 crc kubenswrapper[5099]: I1212 15:22:25.000734 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:25 crc kubenswrapper[5099]: I1212 15:22:25.000748 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:25 crc kubenswrapper[5099]: I1212 15:22:25.000760 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:25Z","lastTransitionTime":"2025-12-12T15:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:25 crc kubenswrapper[5099]: E1212 15:22:25.019365 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:25 crc kubenswrapper[5099]: E1212 15:22:25.019557 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:22:25 crc kubenswrapper[5099]: E1212 15:22:25.019586 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:25 crc kubenswrapper[5099]: E1212 15:22:25.119714 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
[The entry above repeats with fresh timestamps roughly every 100 ms, from 15:22:25.220766 through 15:22:35.412863; the repetitions are elided. The distinct entries logged in that window follow.]
Dec 12 15:22:32 crc kubenswrapper[5099]: I1212 15:22:32.090098 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 15:22:32 crc kubenswrapper[5099]: I1212 15:22:32.735329 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 15:22:32 crc kubenswrapper[5099]: E1212 15:22:32.758945 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:22:33 crc kubenswrapper[5099]: I1212 15:22:33.466626 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:22:33 crc kubenswrapper[5099]: I1212 15:22:33.468442 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:33 crc kubenswrapper[5099]: I1212 15:22:33.468502 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:33 crc kubenswrapper[5099]: I1212 15:22:33.468522 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:33 crc kubenswrapper[5099]: E1212 15:22:33.469150 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:22:33 crc kubenswrapper[5099]: I1212 15:22:33.469498 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57"
Dec 12 15:22:33 crc kubenswrapper[5099]: E1212 15:22:33.469776 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.417131 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.424240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.424481 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.424588 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.424705 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.424801 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:35Z","lastTransitionTime":"2025-12-12T15:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
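[Editor's note: the "back-off 40s" in the CrashLoopBackOff entry above is the kubelet's container restart back-off, which doubles from an initial 10s per consecutive crash up to a 5m cap. A minimal sketch of that schedule in Go, assuming the default initial and cap values; the kubelet's real back-off lives in its internal flowcontrol machinery, not in this loop:]

// Sketch of the restart delay behind "back-off 40s restarting failed container".
// Assumes the kubelet defaults: 10s initial delay, doubling, capped at 5 minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second
		maxWait = 5 * time.Minute
	)
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2 // the delay doubles after each crash...
		if delay > maxWait {
			delay = maxWait // ...until it is capped at 5 minutes
		}
	}
	// The third restart lands on 40s, matching the log entry above.
}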
Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.438979 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [payload elided: identical to the 15:22:25.019365 attempt above, with the condition timestamps advanced to 2025-12-12T15:22:35Z] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.444380 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.444476 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.444507 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
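[Editor's note: each failed attempt above is a strategic merge patch against the Node object; the $setElementOrder/conditions directive pins the order of the merged conditions list, and entries are matched on the merge key "type". A trimmed sketch of how such a patch applies, assuming the k8s.io/apimachinery and k8s.io/api modules are on the module path; the JSON here is a hypothetical two-condition reduction of the real payload, not the payload itself:]

// Sketch: applying a kubelet-style strategic merge patch to a Node status.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	original := []byte(`{"status":{"conditions":[
		{"type":"MemoryPressure","status":"Unknown"},
		{"type":"Ready","status":"Unknown"}]}}`)

	// $setElementOrder pins the ordering of the merged list; list entries are
	// matched on the merge key "type", so only the named conditions change.
	patch := []byte(`{"status":{
		"$setElementOrder/conditions":[{"type":"MemoryPressure"},{"type":"Ready"}],
		"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Node{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged)) // Ready is updated in place; MemoryPressure is untouched
}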
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.444552 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.444590 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:35Z","lastTransitionTime":"2025-12-12T15:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.457122 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [payload elided: identical to the 15:22:35.438979 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.461376 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.461452 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.461482 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
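[Editor's note: the NotReady condition above bottoms out in one concrete check: no CNI configuration file has appeared under /etc/kubernetes/cni/net.d/ yet. A small sketch of that check, assuming only the directory path taken from the log; the authoritative test is done by the container runtime, not by code like this:]

// Sketch: the check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	confs := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI config loaders accept
			fmt.Println("found:", e.Name())
			confs++
		}
	}
	if confs == 0 {
		// Matches NetworkReady=false: the network plugin (OVN-Kubernetes on
		// this node) has not written its configuration yet.
		fmt.Println("no CNI configuration files: node stays NotReady")
	}
}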
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.461514 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.461534 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:35Z","lastTransitionTime":"2025-12-12T15:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.472735 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.476732 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.477029 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.477117 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.477212 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:35 crc kubenswrapper[5099]: I1212 15:22:35.477290 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:35Z","lastTransitionTime":"2025-12-12T15:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.488524 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.488731 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.513092 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.614221 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.715580 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.816406 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:35 crc kubenswrapper[5099]: E1212 15:22:35.916603 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.017155 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.118312 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.219269 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.320011 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.420840 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.521671 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.622598 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.723746 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.824552 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:36 crc kubenswrapper[5099]: E1212 15:22:36.925007 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.025742 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.125904 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.226777 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.327927 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.428906 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.530049 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.630176 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 
15:22:37.731147 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.832212 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:37 crc kubenswrapper[5099]: E1212 15:22:37.932588 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.032788 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.133214 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.233369 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.333991 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.435168 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.535801 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.636621 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.737260 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.837811 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:38 crc kubenswrapper[5099]: E1212 15:22:38.938959 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.040021 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.140877 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.241299 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.342050 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.442861 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.543293 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.644416 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.744560 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc 
kubenswrapper[5099]: E1212 15:22:39.845326 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:39 crc kubenswrapper[5099]: E1212 15:22:39.945789 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.046255 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.147045 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.248094 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.348572 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.449792 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.550184 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.650709 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.751219 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.851849 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:40 crc kubenswrapper[5099]: E1212 15:22:40.952537 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.053179 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.154226 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.255298 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.356264 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.457353 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.557624 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.658364 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.758498 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.859024 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
12 15:22:41 crc kubenswrapper[5099]: E1212 15:22:41.959365 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.060311 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.160569 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.261345 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.361731 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.462745 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.562967 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.663616 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.759846 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.763759 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.864885 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:42 crc kubenswrapper[5099]: E1212 15:22:42.965400 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.066623 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.167389 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.268370 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.368534 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.469468 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.570560 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.671478 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.771909 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.873346 5099 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 12 15:22:43 crc kubenswrapper[5099]: E1212 15:22:43.974289 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.074796 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.175792 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.276935 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.378319 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.479453 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.580411 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.680913 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: E1212 15:22:44.781900 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.849225 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.884074 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.884118 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.884131 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.884160 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.884178 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:44Z","lastTransitionTime":"2025-12-12T15:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.891296 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:22:44 crc kubenswrapper[5099]: I1212 15:22:44.908167 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.005621 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.106216 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.207397 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
[between and after these mirror-pod entries, the same four "Recording event message for node" entries and the same "Node became not ready" condition shown at 15:22:44.884 recur at 15:22:44.986, 15:22:45.088, 15:22:45.190, 15:22:45.292, 15:22:45.394, 15:22:45.497, 15:22:45.599, 15:22:45.702, and 15:22:45.709; these near-identical blocks are omitted]
Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.723889 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{...}\" ..." [status patch payload omitted; identical to the 15:22:35 attempts apart from condition timestamps of 2025-12-12T15:22:45Z]
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"233ad4b4-bf81-4ffb-bf82-a5f43f9c8cc3\\\",\\\"systemUUID\\\":\\\"c9a395de-6bcb-4f0c-8f70-eabd9ff65c63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.728027 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.728084 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.728103 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.728120 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.728132 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:45Z","lastTransitionTime":"2025-12-12T15:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.788008 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.790825 5099 apiserver.go:52] "Watching apiserver" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.799899 5099 reflector.go:430] "Caches populated" type="*v1.Pod" 
reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.800716 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm","openshift-etcd/etcd-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-qwqjz","openshift-multus/network-metrics-daemon-tpqns","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-image-registry/node-ca-g4h65","openshift-dns/node-resolver-jvkrf","openshift-multus/multus-additional-cni-plugins-5q9gc","openshift-multus/multus-g2sj6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-5glsp","openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.806080 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.806160 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.806294 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811266 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.811644 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811868 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811878 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811889 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.811900 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:45Z","lastTransitionTime":"2025-12-12T15:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.813721 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.813776 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.813833 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.814381 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.815998 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.817939 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.818128 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.818396 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.818718 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.820465 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.820719 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.839000 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.850901 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.857310 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.857480 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.859978 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.860010 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.860203 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.860413 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.864111 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.871685 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.874155 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.874402 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.874542 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.874987 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.875721 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.876910 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.877480 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.879813 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.879913 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.889233 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.890468 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.892053 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.892282 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.892442 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.893657 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.893980 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.896830 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.896867 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.900604 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.900835 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.900953 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.901151 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.901388 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.902537 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.902697 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.902534 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.902853 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.902984 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.905606 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-g2sj6" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.905382 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.909323 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.909463 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.913923 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.913982 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.913998 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.914017 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.914029 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:45Z","lastTransitionTime":"2025-12-12T15:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.915785 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.917506 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.917843 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.918118 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.918129 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.918397 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.918627 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.931364 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.943514 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.951018 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tpqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3e8066-7769-4174-b1af-e18146cd80c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzzbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzzbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tpqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.963164 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf0434dc-0e1b-4efe-841c-9462c3097a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5q9gc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.970966 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4h65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833ac15b-be77-479f-acfb-bcf20e4e13f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4h65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977273 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a5f848ed-876f-4b53-83dc-189cf18f5411-hosts-file\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977397 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977436 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slsx\" (UniqueName: \"kubernetes.io/projected/eeb52909-7783-4c4f-a55a-9f4333d025bc-kube-api-access-7slsx\") pod \"machine-config-daemon-qwqjz\" (UID: 
\"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977478 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977507 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977529 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eeb52909-7783-4c4f-a55a-9f4333d025bc-rootfs\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977555 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eeb52909-7783-4c4f-a55a-9f4333d025bc-proxy-tls\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977581 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp7m\" (UniqueName: \"kubernetes.io/projected/a5f848ed-876f-4b53-83dc-189cf18f5411-kube-api-access-brp7m\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977759 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977785 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977821 5099 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977859 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.977990 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978023 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978049 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/833ac15b-be77-479f-acfb-bcf20e4e13f6-host\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978072 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/833ac15b-be77-479f-acfb-bcf20e4e13f6-serviceca\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978100 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978137 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzbt\" (UniqueName: \"kubernetes.io/projected/be3e8066-7769-4174-b1af-e18146cd80c0-kube-api-access-wzzbt\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978163 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" 
(UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978194 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxcm\" (UniqueName: \"kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978232 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978257 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978284 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2tb\" (UniqueName: \"kubernetes.io/projected/833ac15b-be77-479f-acfb-bcf20e4e13f6-kube-api-access-pj2tb\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978353 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5f848ed-876f-4b53-83dc-189cf18f5411-tmp-dir\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978433 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978463 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978491 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978527 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eeb52909-7783-4c4f-a55a-9f4333d025bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.978995 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.979157 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.979195 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.979221 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.979290 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.479244017 +0000 UTC m=+104.583152658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:45 crc kubenswrapper[5099]: E1212 15:22:45.979634 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.479617956 +0000 UTC m=+104.583526647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.980415 5099 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 15:22:45 crc kubenswrapper[5099]: I1212 15:22:45.980779 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.003320 5099 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.006653 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008372 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008410 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008435 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008650 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.508622351 +0000 UTC m=+104.612531002 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008928 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.008989 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.009021 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.009092 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.509071723 +0000 UTC m=+104.612980384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.010614 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.013309 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015197 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015807 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015844 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015858 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015876 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.015890 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.017268 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.017374 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bed312a-3ed9-4851-956d-44a1dbd17ad6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://dbd6070b9997fbf922299d2d5de614331d2859a1c879dcd67076483f83acb766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0eb801dce0dfdca8c998ce3b3e2ed9575a4d66a116287d1cd279d5a0efbe3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4bb03b3f1c6d08ad647cd4706c0a9933059333767708a3d384626a4d0af666dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b13829365333f1323547984f09137d936b7e50c3b6bac449d39dd606bee7f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b8053f468d6ef24360493fcb9938815bf9bd3de045ea4f1e944aac89a2a7f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cdfce8f83f2327b25c7f22f1868bafe6e8d636838d248b903618e8c2a4fb08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cdfce8f83f2327b25c7f22f1868bafe6e8d636838d248b903618e8c2a4fb08b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:21:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://670e55a23cf26db079e729fc2d303305c6cfc976cfc2625560170a9773b89d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e55a23cf26db079e729fc2d303305c6cfc976cfc2625560170a9773b89d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://32bb3d2e134475f8b0863f41ba8879d17927a64dc1ee9202458b42cd0c1fcf19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32bb3d2e134475f8b0863f41ba8879d17927a64dc1ee9202458b42cd0c1fcf19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.031588 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.040555 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.048051 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-jvkrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5f848ed-876f-4b53-83dc-189cf18f5411\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brp7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jvkrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.059653 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb52909-7783-4c4f-a55a-9f4333d025bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7slsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7slsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwqjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086039 5099 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086103 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086035 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75cef4a7-6f0c-4608-baad-b1cff03defa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e55738e03de3e06ef6f82a4c01bbda678164252cc65a89c3562e80e5451f2cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://020d40e58135ecdec57aa9111436c88b85775324c180980117e6b41f0e1249a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9dd1495d2e8782041c8b297544c144ad7c36068b9b6b45168bfae257103e4131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-12T15:21:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086125 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086358 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086383 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086404 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086434 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086462 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086484 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086505 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086528 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: 
\"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086551 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086573 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086596 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086640 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086679 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086701 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086730 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086733 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086750 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086772 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086794 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086814 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086839 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086862 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086882 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086904 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086926 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086946 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" 
(UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.086972 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087110 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087140 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087143 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087186 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087211 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087234 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087254 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087274 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087297 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087319 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087345 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087368 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087389 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087410 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087441 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087464 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087484 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087507 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087530 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087554 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087577 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087601 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087641 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087695 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087723 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087744 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087776 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087810 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087832 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087853 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087875 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087896 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087921 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087944 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087965 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.087987 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088009 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088035 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088056 5099 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088078 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088134 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088183 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088206 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088230 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088252 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088275 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088298 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088321 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088344 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088366 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088387 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088409 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088427 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088432 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088486 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088517 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088540 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088565 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088588 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088612 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088643 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088684 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088712 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088738 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088764 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088788 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088813 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088840 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088866 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088892 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088941 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088969 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.088995 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089020 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089045 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089069 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089114 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089155 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089180 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089216 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089250 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089278 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089304 5099 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089380 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089408 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089433 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089457 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089484 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089510 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089582 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089611 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089638 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089736 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089867 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.089985 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090160 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090203 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090238 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090263 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090288 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090315 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090342 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090360 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090371 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090429 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090457 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090485 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090514 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090542 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090571 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 
15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090602 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090639 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090700 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090743 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090771 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090807 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090841 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090830 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090876 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.090966 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091002 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091025 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091060 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091096 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091128 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091159 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091197 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091225 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091253 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091286 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091314 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091341 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091367 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091379 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091392 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091443 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091474 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091507 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091538 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091566 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091594 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091635 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091695 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091737 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091798 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091825 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091857 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091896 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091931 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091967 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091999 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092030 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092070 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092100 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092128 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092158 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092683 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.094740 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.094786 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.094842 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.094874 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095025 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:22:46 
crc kubenswrapper[5099]: I1212 15:22:46.095077 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095105 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095150 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095177 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095228 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095258 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095286 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095336 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095394 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095428 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:22:46 crc 
kubenswrapper[5099]: I1212 15:22:46.095477 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.095507 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.108076 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4319ff6c-20c8-4f26-b996-c9552942c001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69f2d84d6e1a7de5e88efac2252c13f4edec36566a6e36ccf8997c2c0e770bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74f0e2e484450923bf824ae37cf7682525765962ec622b408e3e6cee712d62f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0e2e484450923bf824ae37cf7682525765962ec622b408e3e6cee712d62f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:21:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.133881 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.148759 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.148918 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091593 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.149000 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.091643 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092094 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.149065 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.150543 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.152345 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.153681 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.154242 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.154948 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155022 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155088 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155129 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155171 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155295 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155386 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155418 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155443 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155467 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155499 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155524 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155640 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.155947 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.156038 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.156052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.157778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.157708 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.157828 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160247 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160463 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160456 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092602 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092654 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092702 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.093123 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.097793 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.098849 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.100989 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.101792 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.102475 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.106288 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.107702 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.108085 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.108242 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.108351 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.109201 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.109951 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.110086 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.110268 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.110406 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.110532 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.110758 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.111571 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.111777 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.112205 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.112229 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.112747 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113128 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113350 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113519 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113607 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113715 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.113907 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.114084 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.114073 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.135630 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.135982 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.137610 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.138411 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.139278 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.139622 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.140342 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.140896 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.141270 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.141450 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.144128 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.144412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.146770 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.148080 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.148727 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160606 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160795 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161095 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161308 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161328 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161364 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161418 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.160518 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161491 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161548 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161573 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.162107 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.162465 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.162481 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.162924 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.162955 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.161957 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.163168 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.164438 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.164680 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165365 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165384 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165576 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165834 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165890 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.165955 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.166011 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.166549 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.166430 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.167416 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.167493 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.167844 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.167934 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.092529 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168225 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168499 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168593 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168788 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.168867 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.169038 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.169219 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.169325 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.169438 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171099 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.169677 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171227 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171247 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171521 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171634 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171722 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171916 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172008 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172782 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.171129 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172382 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172616 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172526 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172957 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.172990 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.173258 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.67323099 +0000 UTC m=+104.777139631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.173636 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.173847 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175259 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175358 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175550 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175718 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176010 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175741 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175714 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175865 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.175851 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176184 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176245 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176368 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176585 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176645 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176701 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.176802 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177111 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177332 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177557 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177613 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177682 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177716 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177747 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177769 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177774 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177795 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177810 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177811 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177822 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177884 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178188 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eeb52909-7783-4c4f-a55a-9f4333d025bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.177561 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178497 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-hostroot\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178551 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrrl\" (UniqueName: \"kubernetes.io/projected/76a2810e-710e-4f57-90b7-23d7bdfea6d8-kube-api-access-qmrrl\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178578 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178604 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178638 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a5f848ed-876f-4b53-83dc-189cf18f5411-hosts-file\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178682 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178707 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178747 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-os-release\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178770 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert\") pod \"ovnkube-node-5glsp\" (UID: 
\"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.178918 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a5f848ed-876f-4b53-83dc-189cf18f5411-hosts-file\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.179542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eeb52909-7783-4c4f-a55a-9f4333d025bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.179565 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.179908 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.179949 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-system-cni-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.179716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180008 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-os-release\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180482 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180525 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180531 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180584 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180607 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180641 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/833ac15b-be77-479f-acfb-bcf20e4e13f6-serviceca\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180691 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-system-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180710 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cni-binary-copy\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180728 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-bin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180744 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-daemon-config\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180759 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-multus-certs\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180778 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180796 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180811 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180825 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180841 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180869 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-wzzbt\" (UniqueName: \"kubernetes.io/projected/be3e8066-7769-4174-b1af-e18146cd80c0-kube-api-access-wzzbt\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180914 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cnibin\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180951 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180974 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181005 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5f848ed-876f-4b53-83dc-189cf18f5411-tmp-dir\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181058 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181078 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181104 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7slsx\" (UniqueName: \"kubernetes.io/projected/eeb52909-7783-4c4f-a55a-9f4333d025bc-kube-api-access-7slsx\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181127 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181153 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-k8s-cni-cncf-io\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181174 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-conf-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181208 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181238 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eeb52909-7783-4c4f-a55a-9f4333d025bc-rootfs\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181262 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eeb52909-7783-4c4f-a55a-9f4333d025bc-proxy-tls\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181286 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-socket-dir-parent\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181308 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cnibin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181349 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brp7m\" (UniqueName: \"kubernetes.io/projected/a5f848ed-876f-4b53-83dc-189cf18f5411-kube-api-access-brp7m\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181381 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181405 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181439 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181462 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22clz\" (UniqueName: \"kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181485 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrbs\" (UniqueName: \"kubernetes.io/projected/cf0434dc-0e1b-4efe-841c-9462c3097a2c-kube-api-access-htrbs\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180628 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.180937 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181116 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181190 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181221 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182562 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eeb52909-7783-4c4f-a55a-9f4333d025bc-rootfs\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181229 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181282 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181814 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.181870 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182630 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182770 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/833ac15b-be77-479f-acfb-bcf20e4e13f6-host\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.182814 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182836 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/833ac15b-be77-479f-acfb-bcf20e4e13f6-host\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182805 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.182876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.182888 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:46.682869737 +0000 UTC m=+104.786778378 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184489 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184557 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-netns\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184591 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-multus\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184622 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-kubelet\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184655 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-etc-kubernetes\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184765 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184820 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184929 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184955 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.184985 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.185014 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plxcm\" (UniqueName: \"kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.185066 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2tb\" (UniqueName: \"kubernetes.io/projected/833ac15b-be77-479f-acfb-bcf20e4e13f6-kube-api-access-pj2tb\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.185203 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.185797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.185983 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186023 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186046 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5f848ed-876f-4b53-83dc-189cf18f5411-tmp-dir\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186319 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186594 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186693 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.186967 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.188771 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.188962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.193547 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.194767 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/eeb52909-7783-4c4f-a55a-9f4333d025bc-proxy-tls\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195459 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195499 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195525 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195544 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195559 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195574 5099 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195549 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.195588 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196107 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196257 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196195 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196549 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196704 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196728 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196741 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196750 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196765 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196775 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196784 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196794 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196803 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196813 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node 
\"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196823 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196833 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196841 5099 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196850 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196860 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.196869 5099 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197002 5099 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197014 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197027 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197063 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197418 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197494 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197621 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197649 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197684 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197697 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197712 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197726 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197739 5099 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197754 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197767 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197778 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197790 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 
15:22:46.197803 5099 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197807 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197815 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197851 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197863 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197972 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197825 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"052c66d7-f3c6-4f4b-97e0-70e9e533308c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plxcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plxcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-lswnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197983 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198170 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198185 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198199 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198219 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc 
kubenswrapper[5099]: I1212 15:22:46.198233 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198246 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198257 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198268 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198281 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198294 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198308 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198320 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198332 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198344 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198356 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198368 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198380 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198392 5099 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198404 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198420 5099 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198432 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198445 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198458 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198471 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198484 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198497 5099 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198509 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198521 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198534 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198536 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/833ac15b-be77-479f-acfb-bcf20e4e13f6-serviceca\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" 
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198546 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198561 5099 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198574 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198588 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198602 5099 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198614 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198628 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198642 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197951 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198654 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198698 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198712 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198725 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198738 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198749 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198762 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198776 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198788 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198800 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198813 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198824 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198836 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" 
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198849 5099 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198861 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198877 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198890 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198903 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198914 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198927 5099 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198940 5099 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198953 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198967 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198980 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.198995 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199008 5099 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" 
DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199020 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199032 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199045 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199057 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199069 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199080 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199092 5099 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199107 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199120 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199133 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199145 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199158 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199169 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199182 5099 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199195 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199207 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199219 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199233 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199245 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199257 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.197962 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199626 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199788 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.199841 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzbt\" (UniqueName: \"kubernetes.io/projected/be3e8066-7769-4174-b1af-e18146cd80c0-kube-api-access-wzzbt\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200067 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200092 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200105 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200125 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200137 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200149 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200161 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200228 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200241 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200252 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200264 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200278 5099 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200291 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200305 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200319 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200331 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200345 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200358 5099 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200372 5099 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200385 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200398 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200411 5099 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200423 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200435 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200447 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200462 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200475 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200488 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200502 5099 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200514 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200526 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200540 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200553 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200565 5099 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200576 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200587 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200599 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200626 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200645 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200700 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200726 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200742 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200763 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200776 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200794 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200981 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.201461 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.200269 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.201791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202435 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202536 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202779 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202805 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202807 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202999 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.203178 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.203249 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.203530 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.203745 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.203774 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.205401 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.205927 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.206475 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2tb\" (UniqueName: \"kubernetes.io/projected/833ac15b-be77-479f-acfb-bcf20e4e13f6-kube-api-access-pj2tb\") pod \"node-ca-g4h65\" (UID: \"833ac15b-be77-479f-acfb-bcf20e4e13f6\") " pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.202396 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.207134 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp7m\" (UniqueName: \"kubernetes.io/projected/a5f848ed-876f-4b53-83dc-189cf18f5411-kube-api-access-brp7m\") pod \"node-resolver-jvkrf\" (UID: \"a5f848ed-876f-4b53-83dc-189cf18f5411\") " pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.207199 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.207589 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.207946 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.208038 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.208170 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.209149 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g2sj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76a2810e-710e-4f57-90b7-23d7bdfea6d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmrrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2sj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.211568 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.213911 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.214390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.214525 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.215856 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxcm\" (UniqueName: \"kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm\") pod \"ovnkube-control-plane-57b78d8988-lswnm\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.217402 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7slsx\" (UniqueName: \"kubernetes.io/projected/eeb52909-7783-4c4f-a55a-9f4333d025bc-kube-api-access-7slsx\") pod \"machine-config-daemon-qwqjz\" (UID: \"eeb52909-7783-4c4f-a55a-9f4333d025bc\") " pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.221162 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.226603 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.227017 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0710a3ca-a09c-439d-97ba-3f61e859fc53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ca3285329aa4c3479abd6f07bee49da719a2390ebd7853988f5b4976e7674ea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a08390e968f9efeb59c99fd00241be16290395dce5767e17f0e950a1770db419\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://261826a2e5b8909122df752fe1f4ad82d30626fd2d53ea720df5e71448d34d14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:22:12Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1212 15:22:12.129770 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:22:12.129913 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 15:22:12.130942 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-854135692/tls.crt::/tmp/serving-cert-854135692/tls.key\\\\\\\"\\\\nI1212 15:22:12.633124 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:22:12.634842 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:22:12.634860 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:22:12.634891 1 maxinflight.go:116] \\\\\\\"Set denominator 
for readonly requests\\\\\\\" limit=400\\\\nI1212 15:22:12.634900 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:22:12.638457 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1212 15:22:12.638468 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:22:12.638506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:22:12.638514 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:22:12.638519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:22:12.638523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:22:12.638526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:22:12.638530 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:22:12.641497 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e83bfe0224eb5c69f23132252eaba9bc3bd0dac62e3a19cf0262162b4c627b2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fc9a00e37b1547b4a00b0a5818ba6fd62e1622ac01a848616d6ea2cb5ae35ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e
812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fc9a00e37b1547b4a00b0a5818ba6fd62e1622ac01a848616d6ea2cb5ae35ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:21:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.229725 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-g4h65" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.238692 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d0db5f-2a8e-4b64-91e3-8cd8497fe20c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c1b0af41b3f61573fb3e181d47ddd0779f46f4d5d772f73b74bd2ac804554851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99e8ac8cca759b5e97b4837de2b4c47f48e65eba25093437a75f7a3fd54ccf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f29b607f977eaecf63aae1de43b25034927ee39b72e46ad06b2f377326508ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:21:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83e5bb42c0cbebaa902028ee4ec0f0e538ae7f1943973db4bd916ee9243d2905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83e5bb42c0cbebaa902028ee4ec0f0e538ae7f1943973db4bd916ee9243d2905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:21:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:21:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.244086 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.247060 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.251201 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:22:46 crc kubenswrapper[5099]: W1212 15:22:46.255201 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833ac15b_be77_479f_acfb_bcf20e4e13f6.slice/crio-52bdcafb54f24588ff53e82dcc2084a9155f39c024a4e067d9c763ebdabcc73d WatchSource:0}: Error finding container 52bdcafb54f24588ff53e82dcc2084a9155f39c024a4e067d9c763ebdabcc73d: Status 404 returned error can't find the container with id 52bdcafb54f24588ff53e82dcc2084a9155f39c024a4e067d9c763ebdabcc73d Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.256830 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd18053-827f-48f8-b64b-4cc0035ce4ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22clz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5glsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.277813 5099 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.277841 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.277850 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.277864 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.277873 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302207 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302640 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302681 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302711 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-system-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302727 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cni-binary-copy\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302742 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-bin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302759 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-daemon-config\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302775 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-multus-certs\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302790 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302811 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302827 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302842 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302859 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302885 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cnibin\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302901 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302916 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin\") pod 
\"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302933 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302949 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302964 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-k8s-cni-cncf-io\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.302981 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-conf-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303000 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-socket-dir-parent\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303015 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cnibin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303080 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303099 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22clz\" (UniqueName: \"kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303128 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-htrbs\" (UniqueName: \"kubernetes.io/projected/cf0434dc-0e1b-4efe-841c-9462c3097a2c-kube-api-access-htrbs\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 
12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303146 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303181 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-netns\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303196 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-multus\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303219 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-kubelet\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303235 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-etc-kubernetes\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303252 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303279 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303294 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303324 
5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-hostroot\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303341 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrrl\" (UniqueName: \"kubernetes.io/projected/76a2810e-710e-4f57-90b7-23d7bdfea6d8-kube-api-access-qmrrl\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303354 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303371 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303387 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303401 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303418 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-os-release\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303432 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-system-cni-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303467 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-os-release\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303487 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303552 5099 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303571 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303585 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303598 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303614 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303624 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303639 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303648 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303656 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303690 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303704 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: 
\"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303714 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303722 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303738 5099 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303746 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303754 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303768 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303776 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303785 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303794 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303803 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303811 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303820 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303829 5099 reconciler_common.go:299] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303838 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303848 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303859 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303868 5099 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303877 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303886 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303895 5099 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303904 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303912 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303921 5099 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303929 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303937 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303945 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: 
\"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303953 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303961 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303969 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303978 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303986 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.303994 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304002 5099 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304010 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304020 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304029 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304037 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304047 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304055 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" 
DevicePath \"\"" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304296 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304338 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304361 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.305006 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.305073 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-system-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.305716 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cni-binary-copy\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.305765 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-bin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306393 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-multus-certs\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306471 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306509 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306554 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306596 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.304234 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306803 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-hostroot\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.306935 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307434 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307502 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307529 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-netns\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307559 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-cni-multus\") pod 
\"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307583 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-var-lib-kubelet\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307610 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-etc-kubernetes\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307657 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307879 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-os-release\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307881 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307918 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-cnibin\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307969 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.307999 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308033 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308412 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/cf0434dc-0e1b-4efe-841c-9462c3097a2c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308477 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308757 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-system-cni-dir\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308855 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf0434dc-0e1b-4efe-841c-9462c3097a2c-os-release\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308931 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308961 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-host-run-k8s-cni-cncf-io\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.308934 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-conf-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.309213 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-cni-dir\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.309614 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.309790 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-daemon-config\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.309895 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-multus-socket-dir-parent\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.309963 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76a2810e-710e-4f57-90b7-23d7bdfea6d8-cnibin\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.320471 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.324169 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-htrbs\" (UniqueName: \"kubernetes.io/projected/cf0434dc-0e1b-4efe-841c-9462c3097a2c-kube-api-access-htrbs\") pod \"multus-additional-cni-plugins-5q9gc\" (UID: \"cf0434dc-0e1b-4efe-841c-9462c3097a2c\") " pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.325140 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22clz\" (UniqueName: \"kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz\") pod \"ovnkube-node-5glsp\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.325145 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrrl\" (UniqueName: \"kubernetes.io/projected/76a2810e-710e-4f57-90b7-23d7bdfea6d8-kube-api-access-qmrrl\") pod \"multus-g2sj6\" (UID: \"76a2810e-710e-4f57-90b7-23d7bdfea6d8\") " pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.379972 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.380027 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.380043 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.380061 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc 
kubenswrapper[5099]: I1212 15:22:46.380074 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.428072 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:22:46 crc kubenswrapper[5099]: W1212 15:22:46.439284 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-1aae96da4ceaf18ae1f62ff6041ef180d3dd00708e2e0ddf27a26e359f013d71 WatchSource:0}: Error finding container 1aae96da4ceaf18ae1f62ff6041ef180d3dd00708e2e0ddf27a26e359f013d71: Status 404 returned error can't find the container with id 1aae96da4ceaf18ae1f62ff6041ef180d3dd00708e2e0ddf27a26e359f013d71 Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.496351 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.496657 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jvkrf" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.499958 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.500045 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.500059 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.500075 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.500087 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.503071 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.506428 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.506799 5099 util.go:30] "No sandbox for pod can be found. 
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.506799 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.507328 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.507439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.507682 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.507706 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.507787 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.507764187 +0000 UTC m=+105.611672828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.508293 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
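Two things are visible in the mount failures above. First, the "not registered" errors: after the kubelet restart, the ConfigMap and Secret this pod references have not yet been re-registered in the kubelet's object cache, so the volume plugins refuse to fetch them. Second, the retry policy: nestedpendingoperations schedules the next attempt with a per-volume doubling backoff, visible as durationBeforeRetry 1s here and 2s when the same two volumes fail again a second later. A sketch of that backoff pattern, with assumed initial and maximum delays (the bookkeeping shape is illustrative, not copied from kubelet source):

package main

import (
	"fmt"
	"time"
)

// volumeBackoff mirrors the "No retries permitted until ... (durationBeforeRetry N)"
// bookkeeping seen in the log: every failure doubles the delay up to a cap.
type volumeBackoff struct {
	delay     time.Duration // durationBeforeRetry
	notBefore time.Time     // "No retries permitted until"
}

func (b *volumeBackoff) fail(now time.Time) {
	const initial, maxDelay = time.Second, 2 * time.Minute // assumed bounds
	if b.delay == 0 {
		b.delay = initial
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.notBefore = now.Add(b.delay)
}

func main() {
	var b volumeBackoff
	now := time.Date(2025, 12, 12, 15, 22, 46, 508_000_000, time.UTC)
	for i := 0; i < 3; i++ {
		b.fail(now)
		fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
			b.notBefore.Format("2006-01-02 15:04:05.000000000 -0700 MST"), b.delay)
		now = b.notBefore // the next failure happens at the earliest retry time
	}
}

The first two steps this prints (1s, then 2s) match the retry windows recorded for the nginx-conf and networking-console-plugin-cert volumes in this log; a third failure would push the window to 4s.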
Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.508346 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.508334921 +0000 UTC m=+105.612243562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.510932 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.513384 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.514112 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.515351 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.519509 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.522195 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.523013 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.524951 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.525226 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.527286 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.529737 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.530578 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.533176 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.534234 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.535265 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.536265 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.537926 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.539325 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2sj6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.539363 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.541348 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.542808 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.546033 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.547152 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.548219 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.550705 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.551885 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.554005 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.556290 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.558029 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.562030 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.563105 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.573632 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.576153 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.580094 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.583832 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.584891 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.586374 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.587341 5099 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.587504 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.593274 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: W1212 15:22:46.595735 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod052c66d7_f3c6_4f4b_97e0_70e9e533308c.slice/crio-0aa84869e63a7b068c8633ff0cfaf9a6cba717286d178777651d1ff358b55dac WatchSource:0}: Error finding container 0aa84869e63a7b068c8633ff0cfaf9a6cba717286d178777651d1ff358b55dac: Status 404 returned error can't find the container with id 0aa84869e63a7b068c8633ff0cfaf9a6cba717286d178777651d1ff358b55dac Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.596308 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.597817 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.599903 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.600565 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.604774 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605254 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605284 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605293 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605309 5099 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605319 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.605805 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.606377 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.608638 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.608723 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.608766 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.608966 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.608995 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609008 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609068 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.609050214 +0000 UTC m=+105.712958865 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609163 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609195 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609211 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.609287 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.609267449 +0000 UTC m=+105.713176090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.609781 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.611432 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.613084 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.614273 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.615267 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.617102 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.618701 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: W1212 15:22:46.619621 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd18053_827f_48f8_b64b_4cc0035ce4ad.slice/crio-9ef85c87cbec67f964cac674fb4487676e0cb4ade90699c1a7c3fc97b0c20bd2 WatchSource:0}: Error finding container 9ef85c87cbec67f964cac674fb4487676e0cb4ade90699c1a7c3fc97b0c20bd2: Status 404 returned error can't find the container with id 9ef85c87cbec67f964cac674fb4487676e0cb4ade90699c1a7c3fc97b0c20bd2 Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.620956 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.621936 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.623621 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.625153 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.709473 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.709659 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.709916 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.710017 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.709991692 +0000 UTC m=+105.813900343 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:22:46 crc kubenswrapper[5099]: E1212 15:22:46.710037 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:47.710028053 +0000 UTC m=+105.813936714 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.715330 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.715363 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.715375 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.715393 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.715405 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.820656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.820965 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.820988 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.821006 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
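The TearDown failure just above is a different failure class from the "not registered" mounts: tearing down a CSI volume requires dialing the node socket the driver registered with the kubelet, and right after this restart kubevirt.io.hostpath-provisioner has not re-registered, so even the name-to-client lookup fails and the unmount is requeued with the same backoff. An assumed-shape sketch of that lookup step (the registry map and socket path are hypothetical stand-ins, not the kubelet implementation):

package main

import (
	"errors"
	"fmt"
)

// registeredDrivers stands in for the kubelet's list of registered CSI
// drivers: driver name -> node plugin socket to dial.
var registeredDrivers = map[string]string{}

func csiClientFor(driver string) (string, error) {
	sock, ok := registeredDrivers[driver]
	if !ok {
		return "", errors.New("driver name " + driver + " not found in the list of registered CSI drivers")
	}
	return sock, nil
}

func main() {
	// Before the driver pod comes back and re-registers, the lookup fails,
	// which is exactly the error string recorded above.
	if _, err := csiClientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("UnmountVolume.TearDown failed:", err)
	}
	// Once re-registered, the same lookup yields a socket to dial
	// (the path here is hypothetical).
	registeredDrivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/hostpath.csi/csi.sock"
	if sock, err := csiClientFor("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("retry would dial", sock)
	}
}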
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.821018 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.932170 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.932215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.932227 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.932243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:46 crc kubenswrapper[5099]: I1212 15:22:46.932256 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:46Z","lastTransitionTime":"2025-12-12T15:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.036905 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.036953 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.036964 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.036979 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.036989 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:47Z","lastTransitionTime":"2025-12-12T15:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.043996 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" exitCode=0
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.044196 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.044238 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"9ef85c87cbec67f964cac674fb4487676e0cb4ade90699c1a7c3fc97b0c20bd2"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375069 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jvkrf" event={"ID":"a5f848ed-876f-4b53-83dc-189cf18f5411","Type":"ContainerStarted","Data":"e548e2409ff337bb5610c3e06b87adc4cf6bf61381ef77cbd24b6c576d5384df"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375186 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375219 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375229 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
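The "Generic (PLEG)" and "SyncLoop (PLEG)" lines above trace the Pod Lifecycle Event Generator: the kubelet periodically relists containers from the runtime, diffs the result against its previous snapshot, and turns each difference into a ContainerStarted or ContainerDied event for the sync loop. The exitCode=0 ContainerDied for ovnkube-node-5glsp, next to a ContainerStarted for its new 9ef85c87... sandbox, is consistent with an init step completing normally as the pod comes up. An assumed-shape sketch of the diff step (illustrative, not kubelet source; container IDs truncated from the entries above):

package main

import "fmt"

type plegEvent struct {
	podID string
	typ   string // "ContainerStarted" or "ContainerDied"
	data  string // container or sandbox ID
}

// diff compares two running-state snapshots for one pod and emits the
// lifecycle events a relist would generate; the order of events coming
// out of a map walk is unspecified in this sketch.
func diff(podID string, prev, curr map[string]bool) []plegEvent {
	var events []plegEvent
	for id, running := range curr {
		if running && !prev[id] {
			events = append(events, plegEvent{podID, "ContainerStarted", id})
		}
	}
	for id, running := range prev {
		if running && !curr[id] {
			events = append(events, plegEvent{podID, "ContainerDied", id})
		}
	}
	return events
}

func main() {
	pod := "0fd18053-827f-48f8-b64b-4cc0035ce4ad" // ovnkube-node-5glsp
	prev := map[string]bool{"7836a149a064": true}
	curr := map[string]bool{"7836a149a064": false, "9ef85c87cbec": true}
	for _, e := range diff(pod, prev, curr) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.podID, e.typ, e.data)
	}
}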
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.375250 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:47Z","lastTransitionTime":"2025-12-12T15:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.385408 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"8f7ed5ba1be3fdcfa7e2fd2b5cdf800e15c29b4faf69f56d78630b8e27f374f3"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.389841 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"1de74fcfb826135f72ed7e288e145798ad5a1b9f4ddaf35d3546ef2c0e14c402"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.389886 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"1aae96da4ceaf18ae1f62ff6041ef180d3dd00708e2e0ddf27a26e359f013d71"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.392347 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"fcd1a3f89c6463b0d2003c333c37ae385286383d9d4d01f4e8e61f5e6bac9923"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.392385 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"359054b16b768ba7ef174d5513affde0abd7b6a959a2df04eea3c3419b80a0bc"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.392396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"e597960c7f51afe52deadae00ba8fe32260bc52493e0c3e7de53403adde8fef7"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.397741 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2sj6" event={"ID":"76a2810e-710e-4f57-90b7-23d7bdfea6d8","Type":"ContainerStarted","Data":"7bb2a42ef66385c829ece927cb9dc792d7a581e8a3ecd1eeb142354f227a5b7f"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.402312 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.402351 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"623a02664671f643abec7351edbf3f419617b617f89e3af24bc32b269261b32b"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.403535 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4h65" event={"ID":"833ac15b-be77-479f-acfb-bcf20e4e13f6","Type":"ContainerStarted","Data":"90886fb50c0b08846f7c237bb19a84c034f483805074450a3932f9d954e89391"}
Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.403568 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4h65"
event={"ID":"833ac15b-be77-479f-acfb-bcf20e4e13f6","Type":"ContainerStarted","Data":"52bdcafb54f24588ff53e82dcc2084a9155f39c024a4e067d9c763ebdabcc73d"} Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.405561 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"5683f33423d305506fc6811baf1aae16bfbc8fb74d4247265936a9611271cd6f"} Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.408491 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerStarted","Data":"0aa84869e63a7b068c8633ff0cfaf9a6cba717286d178777651d1ff358b55dac"} Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.465996 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.466137 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.466537 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.466592 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.466710 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.466787 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.466840 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.466881 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.480034 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.480323 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.480339 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.480354 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.480365 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:47Z","lastTransitionTime":"2025-12-12T15:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.543068 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.543035659 podStartE2EDuration="2.543035659s" podCreationTimestamp="2025-12-12 15:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:47.539850701 +0000 UTC m=+105.643759362" watchObservedRunningTime="2025-12-12 15:22:47.543035659 +0000 UTC m=+105.646944300" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.829556 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.829829 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.829872 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.829902 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.829976 5099 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.829982 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830065 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.830030699 +0000 UTC m=+107.933939340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830098 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.83007476 +0000 UTC m=+107.933983401 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830141 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.830158 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830190 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.830183513 +0000 UTC m=+107.934092154 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830242 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830252 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830269 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830290 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830316 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830326 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830297 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.830291185 +0000 UTC m=+107.934199826 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830382 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.830371057 +0000 UTC m=+107.934279698 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830460 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: E1212 15:22:47.830485 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:49.83047847 +0000 UTC m=+107.934387111 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.831854 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.831881 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.831904 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.831928 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:47 crc kubenswrapper[5099]: I1212 15:22:47.831940 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:47Z","lastTransitionTime":"2025-12-12T15:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.127147 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.127198 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.127208 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.127224 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.127234 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.199615 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=3.199593137 podStartE2EDuration="3.199593137s" podCreationTimestamp="2025-12-12 15:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.155369886 +0000 UTC m=+106.259278527" watchObservedRunningTime="2025-12-12 15:22:48.199593137 +0000 UTC m=+106.303501778"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.200182 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.200167181 podStartE2EDuration="3.200167181s" podCreationTimestamp="2025-12-12 15:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.197342131 +0000 UTC m=+106.301250772" watchObservedRunningTime="2025-12-12 15:22:48.200167181 +0000 UTC m=+106.304075822"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.229144 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.229186 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.229244 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.229263 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
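The pod_startup_latency_tracker entries here (and the etcd-crc one a second earlier) are plain arithmetic: both pull timestamps are the zero time because no image pull was observed for these static pods, so no pull window is excluded and podStartSLOduration works out to the watchObservedRunningTime stamp minus podCreationTimestamp. A quick check against the kube-controller-manager-crc entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log entry above.
	created, err := time.Parse(time.RFC3339, "2025-12-12T15:22:45Z")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(time.RFC3339Nano, "2025-12-12T15:22:48.199593137Z")
	if err != nil {
		panic(err)
	}
	// Prints 3.199593137s, matching podStartSLOduration=3.199593137.
	fmt.Println(observed.Sub(created))
}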
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.229274 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.337775 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.337825 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.337840 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.337858 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.337880 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.517211 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.517811 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.518030 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.518163 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.518236 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.524722 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=3.524705249 podStartE2EDuration="3.524705249s" podCreationTimestamp="2025-12-12 15:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.483868752 +0000 UTC m=+106.587777413" watchObservedRunningTime="2025-12-12 15:22:48.524705249 +0000 UTC m=+106.628613890" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.525783 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.527684 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2sj6" event={"ID":"76a2810e-710e-4f57-90b7-23d7bdfea6d8","Type":"ContainerStarted","Data":"249a7eb4b7d07ea613a98c70245960a8bfaf3ad27af9656d70abc8520710242a"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.529712 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"86e278ccebe436bdf8a7314a37aed16034bbdccf2825d1c8f2723e5536b362f8"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.531825 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"441959c11d240c12fd1bce8b047a3f77bd9a396e98cf620a82b240d5fe8f65c8"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.533610 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerStarted","Data":"453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.558553 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g4h65" podStartSLOduration=82.558530634 podStartE2EDuration="1m22.558530634s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.558051452 +0000 UTC m=+106.661960093" watchObservedRunningTime="2025-12-12 15:22:48.558530634 +0000 UTC m=+106.662439275" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.620485 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.620535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.620545 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.620561 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 
15:22:48.620570 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.628327 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podStartSLOduration=82.628307636 podStartE2EDuration="1m22.628307636s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.626396819 +0000 UTC m=+106.730305460" watchObservedRunningTime="2025-12-12 15:22:48.628307636 +0000 UTC m=+106.732216277" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.649902 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2sj6" podStartSLOduration=82.649882818 podStartE2EDuration="1m22.649882818s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:48.649806956 +0000 UTC m=+106.753715617" watchObservedRunningTime="2025-12-12 15:22:48.649882818 +0000 UTC m=+106.753791459" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.727007 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.727058 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.727081 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.727114 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.727128 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.964901 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.964960 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.964970 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.964986 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:48 crc kubenswrapper[5099]: I1212 15:22:48.964995 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:48Z","lastTransitionTime":"2025-12-12T15:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.066974 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.067026 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.067038 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.067055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.067068 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:49Z","lastTransitionTime":"2025-12-12T15:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.171067 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.171121 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.171135 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.171155 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.171169 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:49Z","lastTransitionTime":"2025-12-12T15:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.273921 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.273992 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.274006 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.274025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.274062 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:49Z","lastTransitionTime":"2025-12-12T15:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.410869 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.410912 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.410924 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.410940 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:49 crc kubenswrapper[5099]: I1212 15:22:49.410952 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:49Z","lastTransitionTime":"2025-12-12T15:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.074883 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.075041 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.075099 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.075125 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.075141 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.075164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075352 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.075284585 +0000 UTC m=+112.179193226 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075563 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075594 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075619 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075644 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075658 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075682 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075739 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.075731686 +0000 UTC m=+112.179640327 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075754 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.075748116 +0000 UTC m=+112.179656757 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075791 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075824 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.075819108 +0000 UTC m=+112.179727739 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075881 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075908 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.07589609 +0000 UTC m=+112.179804731 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075566 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.075930 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:22:54.075925861 +0000 UTC m=+112.179834502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.076858 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.076879 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.077058 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.076858 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.077326 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.077426 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.077518 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:50 crc kubenswrapper[5099]: E1212 15:22:50.077596 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.083503 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.083535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.083550 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.083564 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.083587 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.090215 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.090256 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.091491 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jvkrf" event={"ID":"a5f848ed-876f-4b53-83dc-189cf18f5411","Type":"ContainerStarted","Data":"d40c2f050b4054d618776fc98e03821b835f30798f080ace3095744e8adbdd54"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.097759 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerStarted","Data":"9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.201300 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.201334 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.201343 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.201356 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.201365 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.206279 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jvkrf" podStartSLOduration=85.206249913 podStartE2EDuration="1m25.206249913s" podCreationTimestamp="2025-12-12 15:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:50.154735484 +0000 UTC m=+108.258644145" watchObservedRunningTime="2025-12-12 15:22:50.206249913 +0000 UTC m=+108.310158554" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.207897 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" podStartSLOduration=84.207885704 podStartE2EDuration="1m24.207885704s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:50.206565531 +0000 UTC m=+108.310474172" watchObservedRunningTime="2025-12-12 15:22:50.207885704 +0000 UTC m=+108.311794345" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.366123 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.366173 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.366183 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.366205 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.366214 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.480325 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.480363 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.480375 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.480390 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.480400 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.583716 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.583763 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.583775 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.583794 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.583807 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.688893 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.688931 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.688941 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.688955 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.688972 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.793544 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.793601 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.793617 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.793636 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.793658 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.895738 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.895771 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.895779 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.895791 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:50 crc kubenswrapper[5099]: I1212 15:22:50.895801 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:50Z","lastTransitionTime":"2025-12-12T15:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.126261 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.126300 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.126322 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.126336 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.126345 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.177804 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="441959c11d240c12fd1bce8b047a3f77bd9a396e98cf620a82b240d5fe8f65c8" exitCode=0 Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.177904 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"441959c11d240c12fd1bce8b047a3f77bd9a396e98cf620a82b240d5fe8f65c8"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.180606 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.301919 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.302009 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.302035 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.302064 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.302083 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.477157 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.477211 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:51 crc kubenswrapper[5099]: E1212 15:22:51.477272 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:51 crc kubenswrapper[5099]: E1212 15:22:51.477380 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.477458 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:51 crc kubenswrapper[5099]: E1212 15:22:51.477536 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.485574 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.485652 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.485695 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.485712 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.485733 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.598905 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.598992 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.599008 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.599025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.599037 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.703275 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.703315 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.703326 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.703353 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.703361 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.871475 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.871523 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.871532 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.871548 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:51 crc kubenswrapper[5099]: I1212 15:22:51.871560 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:51Z","lastTransitionTime":"2025-12-12T15:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.047106 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.047147 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.047157 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.047172 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.047182 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.178592 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.178869 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.178882 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.178899 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.178913 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.188751 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.188795 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.280953 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.280995 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.281007 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.281032 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.281047 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.383564 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.383651 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.383733 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.383772 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.383796 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.469356 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:52 crc kubenswrapper[5099]: E1212 15:22:52.469563 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.499708 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.499769 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.499780 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.499795 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.499806 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.637519 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.637566 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.637579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.637601 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.637610 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.739712 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.739762 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.739797 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.739814 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.739825 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.855313 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.855619 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.855630 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.855646 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.855657 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.958337 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.958401 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.958418 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.958437 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:52 crc kubenswrapper[5099]: I1212 15:22:52.958457 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:52Z","lastTransitionTime":"2025-12-12T15:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.114927 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.114981 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.114995 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.115014 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.115027 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.199640 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"1b92116e8e2c5b3bae150f3c818037490ac2e5b552739d0223b29008b557ddad"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.217540 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.217586 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.217607 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.217625 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.217649 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.320044 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.320109 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.320121 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.320139 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.320151 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.422210 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.422270 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.422284 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.422314 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.422328 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.465832 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.465832 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.465911 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:53 crc kubenswrapper[5099]: E1212 15:22:53.466174 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:53 crc kubenswrapper[5099]: E1212 15:22:53.466240 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:53 crc kubenswrapper[5099]: E1212 15:22:53.466253 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.525274 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.525577 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.525782 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.525898 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.526001 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.641585 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.641998 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.642161 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.642309 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.642442 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.744739 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.744787 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.744796 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.744809 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.744820 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.846337 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.846371 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.846379 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.846392 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.846402 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.972285 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.972340 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.972350 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.972366 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:53 crc kubenswrapper[5099]: I1212 15:22:53.972378 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:53Z","lastTransitionTime":"2025-12-12T15:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.074333 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.074385 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.074400 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.074417 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.074430 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167031 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.167399 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.167353994 +0000 UTC m=+120.271262655 (durationBeforeRetry 8s). 
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167564 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167652 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167718 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167781 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.167849 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168038 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168070 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168088 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168199 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.168147684 +0000 UTC m=+120.272056345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.168147684 +0000 UTC m=+120.272056345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168740 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168802 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.16878604 +0000 UTC m=+120.272694701 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168850 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168891 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.168879082 +0000 UTC m=+120.272787733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168952 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.168991 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.168980084 +0000 UTC m=+120.272888745 (durationBeforeRetry 8s). 
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.169125 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.169149 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.169162 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.169206 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:02.16919138 +0000 UTC m=+120.273100041 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.179566 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.179864 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.179997 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.180094 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.180180 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.205304 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.207209 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"05ff25d047e3c03169172cf970761589c7b314265979559657be71deeb9228d7"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.282765 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.282821 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.282833 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.282852 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.282865 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.384821 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.385131 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.385227 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.385306 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.385385 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.466898 5099 util.go:30] "No sandbox for pod can be found. 
Dec 12 15:22:54 crc kubenswrapper[5099]: E1212 15:22:54.467093 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.487727 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.488002 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.488082 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.488196 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.488270 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.590155 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.590578 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.590787 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.590885 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:22:54 crc kubenswrapper[5099]: I1212 15:22:54.590979 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:54Z","lastTransitionTime":"2025-12-12T15:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.012461 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.012527 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.012828 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.012846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.012855 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.115164 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.115206 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.115217 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.115230 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.115244 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.217315 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.218226 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.218331 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.218399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.218470 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.217499 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="1b92116e8e2c5b3bae150f3c818037490ac2e5b552739d0223b29008b557ddad" exitCode=0 Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.217530 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"1b92116e8e2c5b3bae150f3c818037490ac2e5b552739d0223b29008b557ddad"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.349129 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.349164 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.349172 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.349210 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.349228 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.462126 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.462182 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.462195 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.462214 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.462227 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.466768 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:55 crc kubenswrapper[5099]: E1212 15:22:55.466902 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.467009 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:55 crc kubenswrapper[5099]: E1212 15:22:55.467125 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.468117 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:55 crc kubenswrapper[5099]: E1212 15:22:55.468472 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.567889 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.568360 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.568372 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.568386 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.568396 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.670094 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.670135 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.670145 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.670159 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.670169 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.772251 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.772312 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.772326 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.772344 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.772357 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.875399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.875507 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.875538 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.875606 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.875629 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.978184 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.978221 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.978230 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.978243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:55 crc kubenswrapper[5099]: I1212 15:22:55.978252 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:55Z","lastTransitionTime":"2025-12-12T15:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.080172 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.080232 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.080261 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.080280 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.080291 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:56Z","lastTransitionTime":"2025-12-12T15:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.180917 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.180972 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.180992 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.181010 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.181022 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:56Z","lastTransitionTime":"2025-12-12T15:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.204890 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.204937 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.204958 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.204975 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.204986 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:22:56Z","lastTransitionTime":"2025-12-12T15:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.229733 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"b1f54ed66a738f8d3dec22858d4b6e0af35a8e82e0f5e028289233e84792d20a"} Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.253190 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"] Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.258241 5099 util.go:30] "No sandbox for pod can be found. 
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.260463 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.260507 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.260834 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.261652 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.343785 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.344638 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.344858 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.344985 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.345025 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446468 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx"
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446557 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446608 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446723 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446834 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.446897 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.448925 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.454521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.465306 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04cc896-bb2f-4c8f-8d81-c93fe6b126dc-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jgpnx\" (UID: \"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.468406 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:56 crc kubenswrapper[5099]: E1212 15:22:56.468711 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:56 crc kubenswrapper[5099]: I1212 15:22:56.707503 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" Dec 12 15:22:56 crc kubenswrapper[5099]: W1212 15:22:56.736898 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode04cc896_bb2f_4c8f_8d81_c93fe6b126dc.slice/crio-cc8e2dea486c530a42b7dca58749e75a9ad21f5ea781a3c1326a0caf9ee17dcb WatchSource:0}: Error finding container cc8e2dea486c530a42b7dca58749e75a9ad21f5ea781a3c1326a0caf9ee17dcb: Status 404 returned error can't find the container with id cc8e2dea486c530a42b7dca58749e75a9ad21f5ea781a3c1326a0caf9ee17dcb Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.128628 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.145278 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.236808 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="b1f54ed66a738f8d3dec22858d4b6e0af35a8e82e0f5e028289233e84792d20a" exitCode=0 Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.237021 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"b1f54ed66a738f8d3dec22858d4b6e0af35a8e82e0f5e028289233e84792d20a"} Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.241613 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" event={"ID":"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc","Type":"ContainerStarted","Data":"2bb2f1441dc027116592f2ee99462fece481cd44402480d24ca3dad8d307cca7"} Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.241725 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" event={"ID":"e04cc896-bb2f-4c8f-8d81-c93fe6b126dc","Type":"ContainerStarted","Data":"cc8e2dea486c530a42b7dca58749e75a9ad21f5ea781a3c1326a0caf9ee17dcb"} Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.951179 5099 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.951555 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:57 crc kubenswrapper[5099]: E1212 15:22:57.952904 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.952980 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.953026 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57" Dec 12 15:22:57 crc kubenswrapper[5099]: E1212 15:22:57.953088 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:57 crc kubenswrapper[5099]: E1212 15:22:57.953047 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.958111 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:57 crc kubenswrapper[5099]: E1212 15:22:57.958371 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.959651 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerStarted","Data":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.959776 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.959901 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:57 crc kubenswrapper[5099]: I1212 15:22:57.959968 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:58 crc kubenswrapper[5099]: I1212 15:22:57.999575 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:58 crc kubenswrapper[5099]: I1212 15:22:58.010760 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:22:58 crc kubenswrapper[5099]: I1212 15:22:58.020496 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jgpnx" podStartSLOduration=92.020464813 podStartE2EDuration="1m32.020464813s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:58.020208477 +0000 UTC m=+116.124117118" watchObservedRunningTime="2025-12-12 15:22:58.020464813 +0000 UTC m=+116.124373454" Dec 12 15:22:58 crc kubenswrapper[5099]: I1212 15:22:58.159734 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podStartSLOduration=92.159702816 podStartE2EDuration="1m32.159702816s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:58.159378918 +0000 UTC m=+116.263287569" watchObservedRunningTime="2025-12-12 15:22:58.159702816 +0000 UTC m=+116.263611457" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.510022 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:22:59 crc kubenswrapper[5099]: E1212 15:22:59.510244 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.510293 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:22:59 crc kubenswrapper[5099]: E1212 15:22:59.510436 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.510537 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.510537 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:59 crc kubenswrapper[5099]: E1212 15:22:59.510719 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:22:59 crc kubenswrapper[5099]: E1212 15:22:59.510814 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.971570 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 15:22:59 crc kubenswrapper[5099]: I1212 15:22:59.973120 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0a14f83889283662ebe1bbd434233932edffd27b38750ab5edcb5866da81a2b8"} Dec 12 15:23:00 crc kubenswrapper[5099]: I1212 15:23:00.027812 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:23:00 crc kubenswrapper[5099]: I1212 15:23:00.029366 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="b0260f2f1bfc3ce4f4fb61125d27f9994354b0fcee075ca1dc1d459ef5f8835f" exitCode=0 Dec 12 15:23:00 crc kubenswrapper[5099]: I1212 15:23:00.029452 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"b0260f2f1bfc3ce4f4fb61125d27f9994354b0fcee075ca1dc1d459ef5f8835f"} Dec 12 15:23:00 crc kubenswrapper[5099]: I1212 15:23:00.066376 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.066356449 podStartE2EDuration="16.066356449s" podCreationTimestamp="2025-12-12 15:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:00.065786225 +0000 UTC m=+118.169694886" watchObservedRunningTime="2025-12-12 15:23:00.066356449 +0000 UTC m=+118.170265080" Dec 12 15:23:00 crc kubenswrapper[5099]: I1212 15:23:00.611799 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:00 crc kubenswrapper[5099]: E1212 15:23:00.612934 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:01 crc kubenswrapper[5099]: I1212 15:23:01.046857 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"36a400a20d39231cb0be46d5b544258369350d3c52f62ab46a988faa8e883a27"} Dec 12 15:23:01 crc kubenswrapper[5099]: I1212 15:23:01.466243 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:01 crc kubenswrapper[5099]: I1212 15:23:01.466309 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:01 crc kubenswrapper[5099]: E1212 15:23:01.466459 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:01 crc kubenswrapper[5099]: I1212 15:23:01.466775 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:01 crc kubenswrapper[5099]: E1212 15:23:01.466857 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:01 crc kubenswrapper[5099]: E1212 15:23:01.466931 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.239995 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.240110 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.240150 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.240180 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.240214 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240253 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240218431 +0000 UTC m=+136.344127072 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240349 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.240357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240366 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240459 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240468 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240525 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs podName:be3e8066-7769-4174-b1af-e18146cd80c0 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240509738 +0000 UTC m=+136.344418379 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs") pod "network-metrics-daemon-tpqns" (UID: "be3e8066-7769-4174-b1af-e18146cd80c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240390 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240595 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240562 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240553149 +0000 UTC m=+136.344461790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240615 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240710 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240697713 +0000 UTC m=+136.344606354 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240731 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240722943 +0000 UTC m=+136.344631584 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240742 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240765 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.240830 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:18.240817866 +0000 UTC m=+136.344726507 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.740567 5099 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.759818 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.760019 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:02 crc kubenswrapper[5099]: I1212 15:23:02.762778 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.763009 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:02 crc kubenswrapper[5099]: E1212 15:23:02.958915 5099 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Dec 12 15:23:03 crc kubenswrapper[5099]: I1212 15:23:03.465926 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:03 crc kubenswrapper[5099]: I1212 15:23:03.465931 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:03 crc kubenswrapper[5099]: E1212 15:23:03.466083 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:03 crc kubenswrapper[5099]: E1212 15:23:03.466189 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:04 crc kubenswrapper[5099]: I1212 15:23:04.470621 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:04 crc kubenswrapper[5099]: E1212 15:23:04.470851 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:04 crc kubenswrapper[5099]: I1212 15:23:04.471454 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:04 crc kubenswrapper[5099]: E1212 15:23:04.471589 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:04 crc kubenswrapper[5099]: I1212 15:23:04.552266 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tpqns"] Dec 12 15:23:05 crc kubenswrapper[5099]: I1212 15:23:05.075336 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="36a400a20d39231cb0be46d5b544258369350d3c52f62ab46a988faa8e883a27" exitCode=0 Dec 12 15:23:05 crc kubenswrapper[5099]: I1212 15:23:05.075481 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:05 crc kubenswrapper[5099]: E1212 15:23:05.075691 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:05 crc kubenswrapper[5099]: I1212 15:23:05.076168 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"36a400a20d39231cb0be46d5b544258369350d3c52f62ab46a988faa8e883a27"} Dec 12 15:23:05 crc kubenswrapper[5099]: I1212 15:23:05.621022 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:05 crc kubenswrapper[5099]: E1212 15:23:05.621215 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:05 crc kubenswrapper[5099]: I1212 15:23:05.621519 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:05 crc kubenswrapper[5099]: E1212 15:23:05.621822 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:06 crc kubenswrapper[5099]: I1212 15:23:06.081686 5099 generic.go:358] "Generic (PLEG): container finished" podID="cf0434dc-0e1b-4efe-841c-9462c3097a2c" containerID="ef52358cc1a8cb649c1cadf9f23892a178f23d3a3439b228443428d5279a649b" exitCode=0 Dec 12 15:23:06 crc kubenswrapper[5099]: I1212 15:23:06.081734 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerDied","Data":"ef52358cc1a8cb649c1cadf9f23892a178f23d3a3439b228443428d5279a649b"} Dec 12 15:23:06 crc kubenswrapper[5099]: I1212 15:23:06.466737 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:06 crc kubenswrapper[5099]: E1212 15:23:06.466888 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:06 crc kubenswrapper[5099]: I1212 15:23:06.467194 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:06 crc kubenswrapper[5099]: E1212 15:23:06.467354 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:07 crc kubenswrapper[5099]: I1212 15:23:07.090758 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" event={"ID":"cf0434dc-0e1b-4efe-841c-9462c3097a2c","Type":"ContainerStarted","Data":"13cf1d42b03a40df4532d2d81c1e926935159f7ebe6a757625d9e22f78b8fb05"} Dec 12 15:23:07 crc kubenswrapper[5099]: I1212 15:23:07.512970 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:07 crc kubenswrapper[5099]: E1212 15:23:07.513297 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:07 crc kubenswrapper[5099]: I1212 15:23:07.513649 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:07 crc kubenswrapper[5099]: E1212 15:23:07.513742 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:07 crc kubenswrapper[5099]: I1212 15:23:07.513664 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:07 crc kubenswrapper[5099]: E1212 15:23:07.513945 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:07 crc kubenswrapper[5099]: I1212 15:23:07.933962 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5q9gc" podStartSLOduration=101.933934753 podStartE2EDuration="1m41.933934753s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:07.932376265 +0000 UTC m=+126.036284916" watchObservedRunningTime="2025-12-12 15:23:07.933934753 +0000 UTC m=+126.037843404" Dec 12 15:23:07 crc kubenswrapper[5099]: E1212 15:23:07.961315 5099 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:23:08 crc kubenswrapper[5099]: I1212 15:23:08.466840 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:08 crc kubenswrapper[5099]: E1212 15:23:08.467051 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:09 crc kubenswrapper[5099]: I1212 15:23:09.466793 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:09 crc kubenswrapper[5099]: I1212 15:23:09.466813 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:09 crc kubenswrapper[5099]: E1212 15:23:09.466952 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:09 crc kubenswrapper[5099]: I1212 15:23:09.466819 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:09 crc kubenswrapper[5099]: E1212 15:23:09.467059 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:09 crc kubenswrapper[5099]: E1212 15:23:09.467148 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:10 crc kubenswrapper[5099]: I1212 15:23:10.466005 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:10 crc kubenswrapper[5099]: E1212 15:23:10.466486 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:11 crc kubenswrapper[5099]: I1212 15:23:11.055297 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:23:11 crc kubenswrapper[5099]: I1212 15:23:11.466116 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:11 crc kubenswrapper[5099]: I1212 15:23:11.466116 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:11 crc kubenswrapper[5099]: E1212 15:23:11.466317 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:23:11 crc kubenswrapper[5099]: E1212 15:23:11.466406 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:23:11 crc kubenswrapper[5099]: I1212 15:23:11.466121 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:11 crc kubenswrapper[5099]: E1212 15:23:11.466480 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:23:12 crc kubenswrapper[5099]: I1212 15:23:12.467274 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:12 crc kubenswrapper[5099]: E1212 15:23:12.467401 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tpqns" podUID="be3e8066-7769-4174-b1af-e18146cd80c0" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.466731 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.466786 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.466773 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.469492 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.469833 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.470194 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 15:23:13 crc kubenswrapper[5099]: I1212 15:23:13.470420 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 15:23:14 crc kubenswrapper[5099]: I1212 15:23:14.466342 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:14 crc kubenswrapper[5099]: I1212 15:23:14.469498 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 15:23:14 crc kubenswrapper[5099]: I1212 15:23:14.470535 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 15:23:16 crc kubenswrapper[5099]: I1212 15:23:16.977257 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 12 15:23:17 crc kubenswrapper[5099]: I1212 15:23:17.033222 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.270855 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:18 crc kubenswrapper[5099]: E1212 15:23:18.270953 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.270935797 +0000 UTC m=+168.374844438 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.271173 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.271254 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.271310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.272567 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" 
(UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.272612 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.280396 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.281473 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.294816 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.300808 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be3e8066-7769-4174-b1af-e18146cd80c0-metrics-certs\") pod \"network-metrics-daemon-tpqns\" (UID: \"be3e8066-7769-4174-b1af-e18146cd80c0\") " pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.302773 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.379785 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tpqns" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.588409 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.611021 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.895698 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.908330 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6ldwt"] Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.908554 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.921103 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.921397 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.921822 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.922034 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.923974 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jf6f9"] Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.924789 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.928248 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.930200 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.931289 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.972902 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.974160 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.974472 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.974737 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.974946 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.975202 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.976103 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.977542 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.978854 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.979022 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-config\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.979097 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.979131 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-images\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.979203 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.979317 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjvl\" (UniqueName: \"kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.991292 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.994990 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91b03c8a-85d9-4774-bdfe-87d41eace7ca-available-featuregates\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.995272 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjrp\" (UniqueName: \"kubernetes.io/projected/91b03c8a-85d9-4774-bdfe-87d41eace7ca-kube-api-access-znjrp\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:18 crc kubenswrapper[5099]: I1212 15:23:18.996692 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.006600 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.007016 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.010267 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54wb\" (UniqueName: \"kubernetes.io/projected/3c6ceb95-e500-4c75-b79f-135276dd6854-kube-api-access-d54wb\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.010327 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b03c8a-85d9-4774-bdfe-87d41eace7ca-serving-cert\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.010357 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config\") pod \"controller-manager-65b6cccf98-48xth\" (UID: 
\"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.010382 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.010406 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c6ceb95-e500-4c75-b79f-135276dd6854-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.021106 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-87m2r"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.021793 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.026196 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.026476 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.026779 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.027077 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.029965 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.030135 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.033399 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-9d2g2"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.033550 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038115 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038201 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038546 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-kntm5"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038558 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038624 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038704 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.038850 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.039400 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.039427 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.039541 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.077313 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.081742 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.084620 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.088028 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-9bfsz"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.091258 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-xpdmq"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.093780 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.097173 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-6g4jq"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.102507 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.107701 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.110562 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.110841 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.111440 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117027 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-config\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117097 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117117 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-images\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117133 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117180 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjvl\" (UniqueName: \"kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117206 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91b03c8a-85d9-4774-bdfe-87d41eace7ca-available-featuregates\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117221 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znjrp\" (UniqueName: \"kubernetes.io/projected/91b03c8a-85d9-4774-bdfe-87d41eace7ca-kube-api-access-znjrp\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117275 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117293 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d54wb\" (UniqueName: \"kubernetes.io/projected/3c6ceb95-e500-4c75-b79f-135276dd6854-kube-api-access-d54wb\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b03c8a-85d9-4774-bdfe-87d41eace7ca-serving-cert\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117359 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117376 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.117393 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c6ceb95-e500-4c75-b79f-135276dd6854-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.119290 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91b03c8a-85d9-4774-bdfe-87d41eace7ca-available-featuregates\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.119916 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.120550 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-config\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.120696 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.120898 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.121008 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.121827 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c6ceb95-e500-4c75-b79f-135276dd6854-images\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.122553 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.124041 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.124208 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.128091 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.128271 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.128534 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.128811 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.129038 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.129318 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.129460 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.129688 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.129870 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.130000 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: 
I1212 15:23:19.130185 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.133821 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.133996 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.136071 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.138638 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.138954 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139143 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139285 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139418 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139524 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139704 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.139917 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.176892 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.178104 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.182350 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.188797 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b03c8a-85d9-4774-bdfe-87d41eace7ca-serving-cert\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.188879 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c6ceb95-e500-4c75-b79f-135276dd6854-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.190557 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.191101 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.191499 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.194829 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.195323 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.196092 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.206288 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.206405 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.207123 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.207735 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.208304 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.209988 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54wb\" (UniqueName: \"kubernetes.io/projected/3c6ceb95-e500-4c75-b79f-135276dd6854-kube-api-access-d54wb\") pod \"machine-api-operator-755bb95488-6ldwt\" (UID: \"3c6ceb95-e500-4c75-b79f-135276dd6854\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.215477 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjrp\" (UniqueName: \"kubernetes.io/projected/91b03c8a-85d9-4774-bdfe-87d41eace7ca-kube-api-access-znjrp\") pod \"openshift-config-operator-5777786469-jf6f9\" (UID: \"91b03c8a-85d9-4774-bdfe-87d41eace7ca\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218511 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218554 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit-dir\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218576 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8810bcc-64c4-4f66-aa84-7f195007922e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218679 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218701 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7726b6bd-36f7-4478-91ad-1fa75a6da808-tmp-dir\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218835 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8810bcc-64c4-4f66-aa84-7f195007922e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.218957 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-console-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219034 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-config\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219090 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-auth-proxy-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219128 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-client\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219188 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6zz4\" (UniqueName: \"kubernetes.io/projected/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-kube-api-access-n6zz4\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219282 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-config\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219339 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshwp\" (UniqueName: \"kubernetes.io/projected/2d338ea8-c48b-425e-8839-7d9d05af01d7-kube-api-access-jshwp\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219437 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-config\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219513 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6fjz\" (UniqueName: \"kubernetes.io/projected/c43456a5-e138-46ad-bbc2-1b7c25526806-kube-api-access-d6fjz\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219548 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219605 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219629 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219705 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgqpj\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-kube-api-access-cgqpj\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219756 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnj7s\" (UniqueName: \"kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219782 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8619046-af5b-41ea-b50f-ed757800ab47-serving-cert\") pod 
\"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219799 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219842 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-machine-approver-tls\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219867 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqthf\" (UniqueName: \"kubernetes.io/projected/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-kube-api-access-mqthf\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.219936 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220011 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-trusted-ca\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220030 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsghk\" (UniqueName: \"kubernetes.io/projected/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-kube-api-access-xsghk\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220048 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-policies\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220418 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxwx\" (UniqueName: \"kubernetes.io/projected/aa68d84c-a712-4979-afe1-bdb4f8329372-kube-api-access-6jxwx\") pod \"console-64d44f6ddf-xpdmq\" (UID: 
\"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220463 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-trusted-ca-bundle\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220520 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-config\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220602 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220625 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hxv\" (UniqueName: \"kubernetes.io/projected/900d9c74-c186-4e65-8d3d-5f282ced8617-kube-api-access-v8hxv\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220699 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d338ea8-c48b-425e-8839-7d9d05af01d7-serving-cert\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220798 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-tmp-dir\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220814 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kncv\" (UniqueName: \"kubernetes.io/projected/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-kube-api-access-2kncv\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.220845 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc 
kubenswrapper[5099]: I1212 15:23:19.220863 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-serving-cert\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221345 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-encryption-config\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221387 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221424 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221448 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-oauth-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221477 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-client\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221509 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-serving-cert\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221534 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-dir\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221586 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221601 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221648 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221689 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-metrics-tls\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221749 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221789 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdk7w\" (UniqueName: \"kubernetes.io/projected/7726b6bd-36f7-4478-91ad-1fa75a6da808-kube-api-access-jdk7w\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221807 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-serving-ca\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221853 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjb8s\" (UniqueName: \"kubernetes.io/projected/c8619046-af5b-41ea-b50f-ed757800ab47-kube-api-access-fjb8s\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221878 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-client\") pod 
\"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221906 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221940 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-encryption-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.221958 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222008 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222040 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c43456a5-e138-46ad-bbc2-1b7c25526806-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222059 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-service-ca\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222074 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkfc\" (UniqueName: \"kubernetes.io/projected/42d147e5-750e-4c46-bb7a-e99a34fca2f9-kube-api-access-9lkfc\") pod \"downloads-747b44746d-87m2r\" (UID: \"42d147e5-750e-4c46-bb7a-e99a34fca2f9\") " pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222095 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-image-import-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222127 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222144 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-oauth-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.222159 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-serving-cert\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.309224 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.309424 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.309526 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.309646 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.309755 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.310478 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.311075 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.311330 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.311525 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.311653 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.316753 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: 
I1212 15:23:19.317507 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.317647 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.319429 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.321922 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjvl\" (UniqueName: \"kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl\") pod \"controller-manager-65b6cccf98-48xth\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.339019 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.339248 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.339362 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.341022 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.342193 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.343039 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.343985 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.344509 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.344845 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.345090 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.346811 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.347145 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.347843 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.347998 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.348326 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.348587 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.348809 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352327 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352433 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-client\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352471 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352498 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6zz4\" (UniqueName: \"kubernetes.io/projected/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-kube-api-access-n6zz4\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352523 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-config\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352539 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jshwp\" (UniqueName: \"kubernetes.io/projected/2d338ea8-c48b-425e-8839-7d9d05af01d7-kube-api-access-jshwp\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352559 5099 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-config\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352575 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6fjz\" (UniqueName: \"kubernetes.io/projected/c43456a5-e138-46ad-bbc2-1b7c25526806-kube-api-access-d6fjz\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352613 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352630 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352646 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgqpj\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-kube-api-access-cgqpj\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352680 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pnj7s\" (UniqueName: \"kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352699 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8619046-af5b-41ea-b50f-ed757800ab47-serving-cert\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352715 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352730 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-machine-approver-tls\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352747 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqthf\" (UniqueName: \"kubernetes.io/projected/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-kube-api-access-mqthf\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352763 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352789 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-trusted-ca\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352808 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xsghk\" (UniqueName: \"kubernetes.io/projected/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-kube-api-access-xsghk\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352838 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-policies\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352899 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxwx\" (UniqueName: \"kubernetes.io/projected/aa68d84c-a712-4979-afe1-bdb4f8329372-kube-api-access-6jxwx\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352930 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-trusted-ca-bundle\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 
15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.352957 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.353331 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.355539 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.355722 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.355885 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356016 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356073 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-config\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356138 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356171 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hxv\" (UniqueName: \"kubernetes.io/projected/900d9c74-c186-4e65-8d3d-5f282ced8617-kube-api-access-v8hxv\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356198 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d338ea8-c48b-425e-8839-7d9d05af01d7-serving-cert\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356222 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-tmp-dir\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356248 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kncv\" (UniqueName: \"kubernetes.io/projected/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-kube-api-access-2kncv\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: 
\"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356254 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356289 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356314 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-serving-cert\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356351 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-encryption-config\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356380 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356436 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356570 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356756 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356936 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.356410 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357178 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-oauth-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 
crc kubenswrapper[5099]: I1212 15:23:19.357197 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-client\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357222 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-serving-cert\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357239 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-dir\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357268 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357284 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357323 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357345 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-metrics-tls\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357378 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357395 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jdk7w\" (UniqueName: \"kubernetes.io/projected/7726b6bd-36f7-4478-91ad-1fa75a6da808-kube-api-access-jdk7w\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357412 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-serving-ca\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357442 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjb8s\" (UniqueName: \"kubernetes.io/projected/c8619046-af5b-41ea-b50f-ed757800ab47-kube-api-access-fjb8s\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357458 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-client\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357475 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357499 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-encryption-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357515 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357536 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c43456a5-e138-46ad-bbc2-1b7c25526806-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357570 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-service-ca\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357587 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkfc\" (UniqueName: \"kubernetes.io/projected/42d147e5-750e-4c46-bb7a-e99a34fca2f9-kube-api-access-9lkfc\") pod \"downloads-747b44746d-87m2r\" (UID: \"42d147e5-750e-4c46-bb7a-e99a34fca2f9\") " pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357589 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357605 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-image-import-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.357626 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.358781 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.362903 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-client\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.365101 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.365258 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370554 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-oauth-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-serving-cert\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370741 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370777 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit-dir\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370811 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8810bcc-64c4-4f66-aa84-7f195007922e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370876 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370918 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7726b6bd-36f7-4478-91ad-1fa75a6da808-tmp-dir\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370949 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8810bcc-64c4-4f66-aa84-7f195007922e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: 
\"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.370994 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-console-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.371026 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-config\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.371060 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-auth-proxy-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.371927 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.372907 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.373557 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.374842 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.376539 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.377129 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-serving-ca\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.378251 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-config\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.378267 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-config\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.378783 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.379323 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.380098 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.380904 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.381280 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-oauth-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.381649 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.382400 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.386033 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.387159 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.387813 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.388377 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-image-import-ca\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.389654 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-auth-proxy-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.389901 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8619046-af5b-41ea-b50f-ed757800ab47-serving-cert\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.390568 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/c8810bcc-64c4-4f66-aa84-7f195007922e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.390644 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit-dir\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.390952 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-service-ca\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.391487 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-dir\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.391740 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.396890 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-config\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.397128 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.397472 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.397915 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.398099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-console-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.398222 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.398949 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.399329 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7726b6bd-36f7-4478-91ad-1fa75a6da808-tmp-dir\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.399510 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-config\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.400055 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-audit\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.400595 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-serving-cert\") pod 
\"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.401356 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.411240 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-bgl7b"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.413964 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.420531 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.423486 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.425784 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.428294 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.430517 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.432521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8810bcc-64c4-4f66-aa84-7f195007922e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.432695 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.436967 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.462190 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.462501 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.466405 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.468127 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-tmp-dir\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.469203 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-encryption-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.469520 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d338ea8-c48b-425e-8839-7d9d05af01d7-serving-cert\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.469865 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-serving-cert\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.470078 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-encryption-config\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.470470 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7726b6bd-36f7-4478-91ad-1fa75a6da808-etcd-client\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.470475 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-oauth-config\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.470914 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-machine-approver-tls\") pod 
\"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.471015 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-serving-cert\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.472020 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa68d84c-a712-4979-afe1-bdb4f8329372-console-serving-cert\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.472555 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-config\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.473575 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c43456a5-e138-46ad-bbc2-1b7c25526806-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.474517 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/900d9c74-c186-4e65-8d3d-5f282ced8617-etcd-client\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.475109 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.475360 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.475568 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.475757 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.476813 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/900d9c74-c186-4e65-8d3d-5f282ced8617-audit-policies\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.477395 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/c8810bcc-64c4-4f66-aa84-7f195007922e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.479146 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa68d84c-a712-4979-afe1-bdb4f8329372-trusted-ca-bundle\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.479617 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7726b6bd-36f7-4478-91ad-1fa75a6da808-config\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.480115 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-metrics-tls\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.481399 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.482166 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.485207 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8619046-af5b-41ea-b50f-ed757800ab47-trusted-ca\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.485755 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.487112 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.490228 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.491599 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.499524 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.500049 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.509235 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d338ea8-c48b-425e-8839-7d9d05af01d7-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.520254 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.521231 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgqpj\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-kube-api-access-cgqpj\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.521426 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: W1212 15:23:19.523776 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-607e453c133ebbf792b6bc8515c8632ed17a8da727d6abcacaa714a0cd036994 WatchSource:0}: Error finding container 607e453c133ebbf792b6bc8515c8632ed17a8da727d6abcacaa714a0cd036994: Status 404 returned error can't find the container with id 607e453c133ebbf792b6bc8515c8632ed17a8da727d6abcacaa714a0cd036994 Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.527074 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.527102 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.527452 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.527845 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnj7s\" (UniqueName: \"kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s\") pod \"route-controller-manager-776cdc94d6-22f86\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: W1212 15:23:19.534199 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe3e8066_7769_4174_b1af_e18146cd80c0.slice/crio-d36d9ba0da729e5b0cbe297cbe99e4e50c9832b549c2af8aee3d7aeb54463924 WatchSource:0}: Error finding container d36d9ba0da729e5b0cbe297cbe99e4e50c9832b549c2af8aee3d7aeb54463924: Status 404 returned error can't find the container with id d36d9ba0da729e5b0cbe297cbe99e4e50c9832b549c2af8aee3d7aeb54463924 Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.536144 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6ldwt"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.536659 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.538859 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.550500 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k2mn9"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.551725 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.554019 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdk7w\" (UniqueName: \"kubernetes.io/projected/7726b6bd-36f7-4478-91ad-1fa75a6da808-kube-api-access-jdk7w\") pod \"etcd-operator-69b85846b6-cvp2b\" (UID: \"7726b6bd-36f7-4478-91ad-1fa75a6da808\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.557824 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-lgbxs"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.558287 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.562624 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.562726 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.563475 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.569055 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6zz4\" (UniqueName: \"kubernetes.io/projected/0fd8dbd3-0ba2-43fb-8364-7d98167be1b8-kube-api-access-n6zz4\") pod \"openshift-apiserver-operator-846cbfc458-2rzj9\" (UID: \"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.578645 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584803 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584865 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584891 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584919 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584927 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584804 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.584934 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.585281 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.585415 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9zl\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.585604 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.586503 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.086480402 +0000 UTC m=+138.190389043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.586802 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.589889 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.590351 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshwp\" (UniqueName: \"kubernetes.io/projected/2d338ea8-c48b-425e-8839-7d9d05af01d7-kube-api-access-jshwp\") pod \"authentication-operator-7f5c659b84-9jxc6\" (UID: \"2d338ea8-c48b-425e-8839-7d9d05af01d7\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.591954 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593346 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593375 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-87m2r"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593388 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jf6f9"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593415 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593440 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-9bfsz"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593451 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593460 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593468 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593475 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-6g4jq"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593483 5099 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593494 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6k74d"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.593887 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.596442 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jhxlt"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.600385 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h686"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.600578 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.601431 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.608897 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5wjhf"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.609193 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.621503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6fjz\" (UniqueName: \"kubernetes.io/projected/c43456a5-e138-46ad-bbc2-1b7c25526806-kube-api-access-d6fjz\") pod \"cluster-samples-operator-6b564684c8-s9spr\" (UID: \"c43456a5-e138-46ad-bbc2-1b7c25526806\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.621918 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tkw9f"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.622079 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626187 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626244 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626260 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626275 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626287 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626471 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626850 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626878 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626891 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626900 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626912 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626923 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626932 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626943 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-9d2g2"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626952 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626961 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626970 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626979 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6k74d"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.626988 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xpdmq"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627000 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627009 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k2mn9"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627017 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627027 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-lgbxs"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627035 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"] Dec 12 15:23:19 crc 
kubenswrapper[5099]: I1212 15:23:19.627043 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627060 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627068 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h686"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627081 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tkw9f"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.627105 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tpqns"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.647618 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.654727 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkfc\" (UniqueName: \"kubernetes.io/projected/42d147e5-750e-4c46-bb7a-e99a34fca2f9-kube-api-access-9lkfc\") pod \"downloads-747b44746d-87m2r\" (UID: \"42d147e5-750e-4c46-bb7a-e99a34fca2f9\") " pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.662562 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.672240 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8810bcc-64c4-4f66-aa84-7f195007922e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vtqng\" (UID: \"c8810bcc-64c4-4f66-aa84-7f195007922e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: W1212 15:23:19.683464 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-abd6db95a4613bde3b70db0b98e2c80b8f52383de001a71e290a0b5dbb79bbcd WatchSource:0}: Error finding container abd6db95a4613bde3b70db0b98e2c80b8f52383de001a71e290a0b5dbb79bbcd: Status 404 returned error can't find the container with id abd6db95a4613bde3b70db0b98e2c80b8f52383de001a71e290a0b5dbb79bbcd Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690114 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690354 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc 
kubenswrapper[5099]: I1212 15:23:19.690399 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690428 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82cp\" (UniqueName: \"kubernetes.io/projected/dafca5e1-81a4-4904-98bd-a054d15d3afd-kube-api-access-r82cp\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690467 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613e1678-fee7-4592-a8e1-36a5454d482c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690491 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5rr\" (UniqueName: \"kubernetes.io/projected/21cff6b7-1a65-4937-9d61-00c599278e4c-kube-api-access-7q5rr\") pod \"migrator-866fcbc849-gtvwd\" (UID: \"21cff6b7-1a65-4937-9d61-00c599278e4c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690512 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/effa37fd-72fe-49ed-99b8-e190b0115c26-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690536 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690563 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwnhv\" (UniqueName: \"kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690589 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28sf\" (UniqueName: \"kubernetes.io/projected/38193cbf-c891-4f9a-910f-2d7333064556-kube-api-access-x28sf\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.690657 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.190622098 +0000 UTC m=+138.294530749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.690971 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691026 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effa37fd-72fe-49ed-99b8-e190b0115c26-config\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691060 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-webhook-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691103 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691134 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691169 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rs6\" (UniqueName: \"kubernetes.io/projected/174ac316-3890-4143-b377-559d8d137c5c-kube-api-access-v2rs6\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " 
pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.691198 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effa37fd-72fe-49ed-99b8-e190b0115c26-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.694045 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvl52\" (UniqueName: \"kubernetes.io/projected/de54b467-5c8f-470a-9fe9-c54105ff38e2-kube-api-access-jvl52\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.694089 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/413d84ff-aa8f-43d9-85c8-873b9c7855da-serving-cert\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.694387 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5sp\" (UniqueName: \"kubernetes.io/projected/effa37fd-72fe-49ed-99b8-e190b0115c26-kube-api-access-vf5sp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.694523 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/afe5f347-a3d1-4f77-afb7-f490ae797422-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.694718 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.695583 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-images\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.695639 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.695771 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.695979 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413d84ff-aa8f-43d9-85c8-873b9c7855da-config\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696037 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696106 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696212 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696261 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc075443-c3a4-468b-a2db-32223eb9093b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696310 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1814fed8-acf1-4395-86f9-24219f084d55-tmpfs\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696364 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.696463 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42235e4-1a23-419b-bf5e-1e7f25ee251d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.700752 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khzhk\" (UniqueName: \"kubernetes.io/projected/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-kube-api-access-khzhk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.700842 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.700989 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-srv-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.701038 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xmg\" (UniqueName: \"kubernetes.io/projected/bc075443-c3a4-468b-a2db-32223eb9093b-kube-api-access-v9xmg\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.701095 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.701135 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.701317 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613e1678-fee7-4592-a8e1-36a5454d482c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.701767 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afe5f347-a3d1-4f77-afb7-f490ae797422-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.702064 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.702117 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.702137 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.702407 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe5f347-a3d1-4f77-afb7-f490ae797422-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.703114 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdnp\" (UniqueName: \"kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.703177 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.703201 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38193cbf-c891-4f9a-910f-2d7333064556-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.703218 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdr2z\" (UniqueName: \"kubernetes.io/projected/32e575ce-a859-4f31-a407-adf15ebb80bd-kube-api-access-vdr2z\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.703282 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.708955 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2nkh\" (UniqueName: \"kubernetes.io/projected/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-kube-api-access-c2nkh\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.715485 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqthf\" (UniqueName: \"kubernetes.io/projected/44e6a852-c3c4-48d1-b80b-fb3c0897af7c-kube-api-access-mqthf\") pod \"apiserver-9ddfb9f55-9d2g2\" (UID: \"44e6a852-c3c4-48d1-b80b-fb3c0897af7c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.716797 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.716899 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413d84ff-aa8f-43d9-85c8-873b9c7855da-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.717004 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbgn\" 
(UniqueName: \"kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.717051 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9zl\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718255 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718425 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42235e4-1a23-419b-bf5e-1e7f25ee251d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718623 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718712 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613e1678-fee7-4592-a8e1-36a5454d482c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718790 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-tmpfs\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.718907 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719342 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719414 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719489 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719515 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719547 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtv5r\" (UniqueName: \"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-kube-api-access-vtv5r\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719356 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsghk\" (UniqueName: \"kubernetes.io/projected/3dde3606-4e50-4c3e-8b0d-baa57e43c41c-kube-api-access-xsghk\") pod \"machine-approver-54c688565-kntm5\" (UID: \"3dde3606-4e50-4c3e-8b0d-baa57e43c41c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719683 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-metrics-certs\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719748 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.719881 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh5jw\" (UniqueName: \"kubernetes.io/projected/c42235e4-1a23-419b-bf5e-1e7f25ee251d-kube-api-access-hh5jw\") pod 
\"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.720250 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/613e1678-fee7-4592-a8e1-36a5454d482c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.720363 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.720520 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe5f347-a3d1-4f77-afb7-f490ae797422-config\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.723010 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-stats-auth\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.724231 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.724393 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-apiservice-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.724543 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/174ac316-3890-4143-b377-559d8d137c5c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.724709 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlws2\" (UniqueName: 
\"kubernetes.io/projected/1814fed8-acf1-4395-86f9-24219f084d55-kube-api-access-nlws2\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.724850 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.725544 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.725871 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.725935 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc075443-c3a4-468b-a2db-32223eb9093b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.725980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/413d84ff-aa8f-43d9-85c8-873b9c7855da-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.726080 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.726126 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-default-certificate\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.726210 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.727524 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.22750393 +0000 UTC m=+138.331412561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.729165 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.729521 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.730185 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: W1212 15:23:19.731166 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-15770eedd166b755ad5170280ecf3041a393936ede557b962579cac5a754e305 WatchSource:0}: Error finding container 15770eedd166b755ad5170280ecf3041a393936ede557b962579cac5a754e305: Status 404 returned error can't find the container with id 15770eedd166b755ad5170280ecf3041a393936ede557b962579cac5a754e305 Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.732864 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.735857 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjb8s\" (UniqueName: \"kubernetes.io/projected/c8619046-af5b-41ea-b50f-ed757800ab47-kube-api-access-fjb8s\") pod \"console-operator-67c89758df-9bfsz\" (UID: \"c8619046-af5b-41ea-b50f-ed757800ab47\") " pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.848806 5099 util.go:30] "No sandbox 
Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.848806 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.850015 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.850092 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.850567 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6c20a46e-1da7-4233-b89a-5029f754b132-tmp-dir\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.853864 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.854824 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.855448 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.857646 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.857739 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.857870 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.859729 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-metrics-certs\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.860324 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.360292222 +0000 UTC m=+138.464200863 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.860529 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.860985 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861187 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hh5jw\" (UniqueName: \"kubernetes.io/projected/c42235e4-1a23-419b-bf5e-1e7f25ee251d-kube-api-access-hh5jw\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861314 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861404 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-tmpfs\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861521 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861525 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-mountpoint-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.861732 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862153 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/613e1678-fee7-4592-a8e1-36a5454d482c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862212 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862245 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe5f347-a3d1-4f77-afb7-f490ae797422-config\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862294 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-stats-auth\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862323 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxm7q\" (UniqueName: \"kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862361 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862408 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-apiservice-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862436 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/174ac316-3890-4143-b377-559d8d137c5c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862594 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nlws2\" (UniqueName: \"kubernetes.io/projected/1814fed8-acf1-4395-86f9-24219f084d55-kube-api-access-nlws2\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862639 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862687 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862712 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc075443-c3a4-468b-a2db-32223eb9093b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862736 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/413d84ff-aa8f-43d9-85c8-873b9c7855da-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862832 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/613e1678-fee7-4592-a8e1-36a5454d482c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862993 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: 
\"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.863384 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe5f347-a3d1-4f77-afb7-f490ae797422-config\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.863557 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/413d84ff-aa8f-43d9-85c8-873b9c7855da-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.862767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-default-certificate\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864004 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864077 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864105 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xtg\" (UniqueName: \"kubernetes.io/projected/1e63fcef-7121-4025-b914-bb1ee37e8d5a-kube-api-access-q5xtg\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864130 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864161 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jrq\" (UniqueName: \"kubernetes.io/projected/3adb4515-a2d2-4849-a626-81443f61d9d2-kube-api-access-n8jrq\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc 
kubenswrapper[5099]: I1212 15:23:19.864184 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864429 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-registration-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864460 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r82cp\" (UniqueName: \"kubernetes.io/projected/dafca5e1-81a4-4904-98bd-a054d15d3afd-kube-api-access-r82cp\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864481 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613e1678-fee7-4592-a8e1-36a5454d482c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864503 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5rr\" (UniqueName: \"kubernetes.io/projected/21cff6b7-1a65-4937-9d61-00c599278e4c-kube-api-access-7q5rr\") pod \"migrator-866fcbc849-gtvwd\" (UID: \"21cff6b7-1a65-4937-9d61-00c599278e4c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864803 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/effa37fd-72fe-49ed-99b8-e190b0115c26-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.864976 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.865019 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.865065 5099 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vwnhv\" (UniqueName: \"kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.865089 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x28sf\" (UniqueName: \"kubernetes.io/projected/38193cbf-c891-4f9a-910f-2d7333064556-kube-api-access-x28sf\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.865512 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.365490688 +0000 UTC m=+138.469399329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.865756 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/effa37fd-72fe-49ed-99b8-e190b0115c26-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.865768 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.866149 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effa37fd-72fe-49ed-99b8-e190b0115c26-config\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.866957 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effa37fd-72fe-49ed-99b8-e190b0115c26-config\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.867905 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d76cc\" (UniqueName: 
\"kubernetes.io/projected/6c20a46e-1da7-4233-b89a-5029f754b132-kube-api-access-d76cc\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868216 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-webhook-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868370 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868485 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868845 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rs6\" (UniqueName: \"kubernetes.io/projected/174ac316-3890-4143-b377-559d8d137c5c-kube-api-access-v2rs6\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868907 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effa37fd-72fe-49ed-99b8-e190b0115c26-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.868947 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvl52\" (UniqueName: \"kubernetes.io/projected/de54b467-5c8f-470a-9fe9-c54105ff38e2-kube-api-access-jvl52\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.869174 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/413d84ff-aa8f-43d9-85c8-873b9c7855da-serving-cert\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870244 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5sp\" (UniqueName: \"kubernetes.io/projected/effa37fd-72fe-49ed-99b8-e190b0115c26-kube-api-access-vf5sp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870284 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/afe5f347-a3d1-4f77-afb7-f490ae797422-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870315 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870470 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crs4f\" (UniqueName: \"kubernetes.io/projected/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-kube-api-access-crs4f\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870531 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-images\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870558 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870590 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870619 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413d84ff-aa8f-43d9-85c8-873b9c7855da-config\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870645 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870733 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870794 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc075443-c3a4-468b-a2db-32223eb9093b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870800 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/afe5f347-a3d1-4f77-afb7-f490ae797422-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870816 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1814fed8-acf1-4395-86f9-24219f084d55-tmpfs\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870872 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870903 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870932 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42235e4-1a23-419b-bf5e-1e7f25ee251d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.870984 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khzhk\" (UniqueName: \"kubernetes.io/projected/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-kube-api-access-khzhk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871009 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871046 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-srv-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871074 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871104 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xmg\" (UniqueName: \"kubernetes.io/projected/bc075443-c3a4-468b-a2db-32223eb9093b-kube-api-access-v9xmg\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871123 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871151 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-socket-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871181 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-plugins-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 
15:23:19.871204 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871235 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871264 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613e1678-fee7-4592-a8e1-36a5454d482c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871299 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsk9h\" (UniqueName: \"kubernetes.io/projected/092012fd-8c87-44d7-92dc-83036f270c8c-kube-api-access-bsk9h\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871323 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afe5f347-a3d1-4f77-afb7-f490ae797422-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871352 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871374 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871397 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871424 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe5f347-a3d1-4f77-afb7-f490ae797422-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdnp\" (UniqueName: \"kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871475 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871506 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38193cbf-c891-4f9a-910f-2d7333064556-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871545 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdr2z\" (UniqueName: \"kubernetes.io/projected/32e575ce-a859-4f31-a407-adf15ebb80bd-kube-api-access-vdr2z\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871570 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2nkh\" (UniqueName: \"kubernetes.io/projected/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-kube-api-access-c2nkh\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871621 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871700 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871725 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413d84ff-aa8f-43d9-85c8-873b9c7855da-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871726 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1814fed8-acf1-4395-86f9-24219f084d55-tmpfs\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871764 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbgn\" (UniqueName: \"kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871809 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b24n\" (UniqueName: \"kubernetes.io/projected/cd3fd373-c790-4c95-94ee-3fc86809aaf2-kube-api-access-5b24n\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871842 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-csi-data-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871893 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.871906 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.873034 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc075443-c3a4-468b-a2db-32223eb9093b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.873564 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effa37fd-72fe-49ed-99b8-e190b0115c26-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.874345 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42235e4-1a23-419b-bf5e-1e7f25ee251d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.884209 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885006 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42235e4-1a23-419b-bf5e-1e7f25ee251d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885133 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885170 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613e1678-fee7-4592-a8e1-36a5454d482c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885218 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-tmpfs\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885250 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.885639 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.886461 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.888691 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.888791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtv5r\" (UniqueName: \"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-kube-api-access-vtv5r\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.889385 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.889408 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-tmpfs\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.896134 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613e1678-fee7-4592-a8e1-36a5454d482c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.896454 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.898233 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.905227 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe5f347-a3d1-4f77-afb7-f490ae797422-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.907019 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613e1678-fee7-4592-a8e1-36a5454d482c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.914027 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.920335 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42235e4-1a23-419b-bf5e-1e7f25ee251d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.931813 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.937554 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.937943 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-metrics-certs\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.988124 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.988455 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.994961 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.995205 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.49517975 +0000 UTC m=+138.599088391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995276 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995307 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-tmpfs\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995347 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995376 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-mountpoint-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995424 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxm7q\" (UniqueName: \"kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995492 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995573 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5xtg\" (UniqueName: \"kubernetes.io/projected/1e63fcef-7121-4025-b914-bb1ee37e8d5a-kube-api-access-q5xtg\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995597 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995620 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jrq\" (UniqueName: \"kubernetes.io/projected/3adb4515-a2d2-4849-a626-81443f61d9d2-kube-api-access-n8jrq\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995771 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-registration-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995843 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995903 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d76cc\" (UniqueName: \"kubernetes.io/projected/6c20a46e-1da7-4233-b89a-5029f754b132-kube-api-access-d76cc\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.995989 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-crs4f\" (UniqueName: \"kubernetes.io/projected/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-kube-api-access-crs4f\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996068 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc 
kubenswrapper[5099]: I1212 15:23:19.996102 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996126 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996144 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-socket-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996170 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-plugins-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996219 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996250 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bsk9h\" (UniqueName: \"kubernetes.io/projected/092012fd-8c87-44d7-92dc-83036f270c8c-kube-api-access-bsk9h\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996275 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996339 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:19 crc 
Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996373 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5b24n\" (UniqueName: \"kubernetes.io/projected/cd3fd373-c790-4c95-94ee-3fc86809aaf2-kube-api-access-5b24n\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"
Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996398 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-csi-data-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996445 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6c20a46e-1da7-4233-b89a-5029f754b132-tmp-dir\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:19 crc kubenswrapper[5099]: I1212 15:23:19.996631 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-mountpoint-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:19 crc kubenswrapper[5099]: E1212 15:23:19.997011 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.497003067 +0000 UTC m=+138.600911708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:19.999516 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-default-certificate\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:19.999606 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-registration-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:19.999863 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.000199 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-socket-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.000333 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-csi-data-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.000868 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.000968 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3adb4515-a2d2-4849-a626-81443f61d9d2-plugins-dir\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.001173 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6c20a46e-1da7-4233-b89a-5029f754b132-tmp-dir\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.001496 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf"
\"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.012087 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-tmpfs\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.017008 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/174ac316-3890-4143-b377-559d8d137c5c-stats-auth\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.017428 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.023088 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6ldwt"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.024203 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/174ac316-3890-4143-b377-559d8d137c5c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.032915 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: W1212 15:23:20.103199 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac07f35f_c1de_4d9f_9f8e_2eb135e271ae.slice/crio-4c2fb33569f43c57b1c3d0221d356ae63a8a04f8f2aba09c8533f6c5c5659e11 WatchSource:0}: Error finding container 4c2fb33569f43c57b1c3d0221d356ae63a8a04f8f2aba09c8533f6c5c5659e11: Status 404 returned error can't find the container with id 4c2fb33569f43c57b1c3d0221d356ae63a8a04f8f2aba09c8533f6c5c5659e11 Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.103600 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.104811 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.104964 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.105159 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.106900 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.606871663 +0000 UTC m=+138.710780314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.109618 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.110036 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.610018675 +0000 UTC m=+138.713927316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.112521 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.119389 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.124886 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413d84ff-aa8f-43d9-85c8-873b9c7855da-config\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.131413 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.150982 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.171443 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.171518 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6"]
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.179676 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jf6f9"]
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.185969 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/413d84ff-aa8f-43d9-85c8-873b9c7855da-serving-cert\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.194640 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.205281 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr"]
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.210520 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.210761 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.71073185 +0000 UTC m=+138.814640491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.211212 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.211258 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.211721 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.711659184 +0000 UTC m=+138.815567825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.224398 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc075443-c3a4-468b-a2db-32223eb9093b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.249440 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kncv\" (UniqueName: \"kubernetes.io/projected/5880f9c7-9f93-49ac-9a5f-1fbe457edb1b-kube-api-access-2kncv\") pod \"dns-operator-799b87ffcd-6g4jq\" (UID: \"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.290464 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.294721 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.308999 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hxv\" (UniqueName: \"kubernetes.io/projected/900d9c74-c186-4e65-8d3d-5f282ced8617-kube-api-access-v8hxv\") pod \"apiserver-8596bd845d-4lz9z\" (UID: \"900d9c74-c186-4e65-8d3d-5f282ced8617\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.311530 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.312647 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.315023 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxwx\" (UniqueName: \"kubernetes.io/projected/aa68d84c-a712-4979-afe1-bdb4f8329372-kube-api-access-6jxwx\") pod \"console-64d44f6ddf-xpdmq\" (UID: \"aa68d84c-a712-4979-afe1-bdb4f8329372\") " pod="openshift-console/console-64d44f6ddf-xpdmq"
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.319120 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.819086975 +0000 UTC m=+138.922995616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.324089 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.354755 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.367616 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.368490 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.428659 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.428866 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.428924 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.429052 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.429468 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:20.929449923 +0000 UTC m=+139.033358554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.435565 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.436961 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.473118 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.475806 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.478459 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.480067 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.480283 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.481261 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.482062 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
\"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.482230 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.506654 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" event={"ID":"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae","Type":"ContainerStarted","Data":"4c2fb33569f43c57b1c3d0221d356ae63a8a04f8f2aba09c8533f6c5c5659e11"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.509805 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" event={"ID":"3c6ceb95-e500-4c75-b79f-135276dd6854","Type":"ContainerStarted","Data":"2a3f725f2572fd9f46deaf413b019ad145fba5d4ebb4e3523309c7b6d982e4b6"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.519634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"e9f20cf7abc009a1e73f42332a9e4ecb896f296001136bdce63c1ebc9ef58d8f"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.519736 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"abd6db95a4613bde3b70db0b98e2c80b8f52383de001a71e290a0b5dbb79bbcd"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.522056 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tpqns" event={"ID":"be3e8066-7769-4174-b1af-e18146cd80c0","Type":"ContainerStarted","Data":"d36d9ba0da729e5b0cbe297cbe99e4e50c9832b549c2af8aee3d7aeb54463924"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.523254 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"15770eedd166b755ad5170280ecf3041a393936ede557b962579cac5a754e305"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.524245 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" event={"ID":"3dde3606-4e50-4c3e-8b0d-baa57e43c41c","Type":"ContainerStarted","Data":"6d9bd90f51a77828f3e860afa69c3b38abefe77a6f4b1d975f90237998e46a60"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.525000 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" event={"ID":"2d338ea8-c48b-425e-8839-7d9d05af01d7","Type":"ContainerStarted","Data":"a7f0ad4b29307fdb74f5544e850101fb31db2c2c98c5bc2007003fb9a9220dc7"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.525550 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" event={"ID":"1c0b0461-cdc7-4453-9766-eb3dc5385423","Type":"ContainerStarted","Data":"fb5df76e839bca6872314ca26ea198055979be24d75c22413a9f9f3766c6c3b6"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.526412 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" event={"ID":"91b03c8a-85d9-4774-bdfe-87d41eace7ca","Type":"ContainerStarted","Data":"cfd0e051084e02bab6a5f074dbffe17eb5b83dd168115bed3491ee6a2be96caf"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.527933 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"a6244ae93ec857064fee19312dade06283d354705d701547fa5469ade6781ebb"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.527967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"607e453c133ebbf792b6bc8515c8632ed17a8da727d6abcacaa714a0cd036994"} Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.528880 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.560098 5099 request.go:752] "Waited before sending request" delay="1.071960712s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&limit=500&resourceVersion=0" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.560377 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.560876 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.560912 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.561194 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.061171248 +0000 UTC m=+139.165079889 (durationBeforeRetry 500ms). 
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.561425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.561807 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.061795214 +0000 UTC m=+139.165703855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.565258 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.565559 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.565993 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.572813 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.574273 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.574431 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.578609 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.580793 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.596017 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.606710 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38193cbf-c891-4f9a-910f-2d7333064556-images\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.619418 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.632074 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.641047 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38193cbf-c891-4f9a-910f-2d7333064556-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.656749 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.665469 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.665706 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-9d2g2"]
Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.666045 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.166022743 +0000 UTC m=+139.269931384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.666185 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.676467 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.697021 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.699181 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.707342 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.710924 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.710999 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-profile-collector-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.722111 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-87m2r"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.723370 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.726137 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.734330 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 15:23:20 crc kubenswrapper[5099]: 
I1212 15:23:20.752462 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.757457 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-srv-cert\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.774631 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.777772 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.778224 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.278205508 +0000 UTC m=+139.382114149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.796556 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 15:23:20 crc kubenswrapper[5099]: W1212 15:23:20.806580 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7726b6bd_36f7_4478_91ad_1fa75a6da808.slice/crio-5855d004fecf0d233e786c8702e72c00b5e5f4ccd1c235326a4a38e5f43b7863 WatchSource:0}: Error finding container 5855d004fecf0d233e786c8702e72c00b5e5f4ccd1c235326a4a38e5f43b7863: Status 404 returned error can't find the container with id 5855d004fecf0d233e786c8702e72c00b5e5f4ccd1c235326a4a38e5f43b7863 Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.810517 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.813030 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-9bfsz"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.815280 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng"] Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.820384 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" Dec 12 15:23:20 crc kubenswrapper[5099]: W1212 15:23:20.830417 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd8dbd3_0ba2_43fb_8364_7d98167be1b8.slice/crio-88ef4e2daeb5a93832e2bfeaab86a4b254393ec17259fee0cfd9be4e182d4c14 WatchSource:0}: Error finding container 88ef4e2daeb5a93832e2bfeaab86a4b254393ec17259fee0cfd9be4e182d4c14: Status 404 returned error can't find the container with id 88ef4e2daeb5a93832e2bfeaab86a4b254393ec17259fee0cfd9be4e182d4c14 Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.831338 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.841276 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-webhook-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.884904 5099 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885035 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics podName:dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385008453 +0000 UTC m=+139.488917094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-fmnlp" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885418 5099 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885463 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle podName:de54b467-5c8f-470a-9fe9-c54105ff38e2 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385453645 +0000 UTC m=+139.489362286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle") pod "service-ca-74545575db-lgbxs" (UID: "de54b467-5c8f-470a-9fe9-c54105ff38e2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.885506 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885737 5099 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885736 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385720382 +0000 UTC m=+139.489629023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885774 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs podName:dafca5e1-81a4-4904-98bd-a054d15d3afd nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385761843 +0000 UTC m=+139.489670484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs") pod "multus-admission-controller-69db94689b-k2mn9" (UID: "dafca5e1-81a4-4904-98bd-a054d15d3afd") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885796 5099 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885818 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca podName:dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385812684 +0000 UTC m=+139.489721325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-fmnlp" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885832 5099 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885856 5099 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885857 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert podName:32e575ce-a859-4f31-a407-adf15ebb80bd nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385848595 +0000 UTC m=+139.489757246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-kf45z" (UID: "32e575ce-a859-4f31-a407-adf15ebb80bd") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.885878 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key podName:de54b467-5c8f-470a-9fe9-c54105ff38e2 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.385872076 +0000 UTC m=+139.489780707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key") pod "service-ca-74545575db-lgbxs" (UID: "de54b467-5c8f-470a-9fe9-c54105ff38e2") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.885960 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.886292 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.386284346 +0000 UTC m=+139.490192987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: W1212 15:23:20.890625 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8810bcc_64c4_4f66_aa84_7f195007922e.slice/crio-941fd543d7dbdb04e6b9c3a8f1854e2eb0711c54a1ad4c3673fc615f21f3b413 WatchSource:0}: Error finding container 941fd543d7dbdb04e6b9c3a8f1854e2eb0711c54a1ad4c3673fc615f21f3b413: Status 404 returned error can't find the container with id 941fd543d7dbdb04e6b9c3a8f1854e2eb0711c54a1ad4c3673fc615f21f3b413 Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.894373 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.895774 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1814fed8-acf1-4395-86f9-24219f084d55-apiservice-cert\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.896231 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.896460 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.911080 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.934960 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.959469 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.976044 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.987647 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.988254 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:21.488221465 +0000 UTC m=+139.592130106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:20 crc kubenswrapper[5099]: I1212 15:23:20.996150 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.997478 5099 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.997653 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert podName:092012fd-8c87-44d7-92dc-83036f270c8c nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.49763351 +0000 UTC m=+139.601542141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert") pod "ingress-canary-6k74d" (UID: "092012fd-8c87-44d7-92dc-83036f270c8c") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.998419 5099 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.998544 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume podName:6c20a46e-1da7-4233-b89a-5029f754b132 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.498534603 +0000 UTC m=+139.602443244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume") pod "dns-default-tkw9f" (UID: "6c20a46e-1da7-4233-b89a-5029f754b132") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.998858 5099 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:20 crc kubenswrapper[5099]: E1212 15:23:20.999012 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token podName:1e63fcef-7121-4025-b914-bb1ee37e8d5a nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.498998266 +0000 UTC m=+139.602906907 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token") pod "machine-config-server-jhxlt" (UID: "1e63fcef-7121-4025-b914-bb1ee37e8d5a") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000057 5099 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000113 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert podName:be1d4c1f-99d8-40f2-b6d5-d6b7aec07317 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.500100694 +0000 UTC m=+139.604009335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert") pod "olm-operator-5cdf44d969-wn5sv" (UID: "be1d4c1f-99d8-40f2-b6d5-d6b7aec07317") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000163 5099 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000221 5099 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000244 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist podName:9fbc0f6e-a03e-414c-8f95-4bc036fac71b nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.500235338 +0000 UTC m=+139.604143979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-5wjhf" (UID: "9fbc0f6e-a03e-414c-8f95-4bc036fac71b") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000313 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config podName:cd3fd373-c790-4c95-94ee-3fc86809aaf2 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.500289839 +0000 UTC m=+139.604198480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config") pod "service-ca-operator-5b9c976747-ptfxw" (UID: "cd3fd373-c790-4c95-94ee-3fc86809aaf2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000447 5099 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.000485 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert podName:cd3fd373-c790-4c95-94ee-3fc86809aaf2 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:21.500474744 +0000 UTC m=+139.604383385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert") pod "service-ca-operator-5b9c976747-ptfxw" (UID: "cd3fd373-c790-4c95-94ee-3fc86809aaf2") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.001344 5099 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.001423 5099 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.001463 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs podName:1e63fcef-7121-4025-b914-bb1ee37e8d5a nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.50145484 +0000 UTC m=+139.605363481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs") pod "machine-config-server-jhxlt" (UID: "1e63fcef-7121-4025-b914-bb1ee37e8d5a") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.001494 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls podName:6c20a46e-1da7-4233-b89a-5029f754b132 nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.501489051 +0000 UTC m=+139.605397692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls") pod "dns-default-tkw9f" (UID: "6c20a46e-1da7-4233-b89a-5029f754b132") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.011733 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.053077 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.053160 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.068485 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xpdmq"] Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.073514 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.075421 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-6g4jq"] Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.082275 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"] Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.089688 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.090108 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.590090061 +0000 UTC m=+139.693998702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.091514 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.112627 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 15:23:21 crc kubenswrapper[5099]: W1212 15:23:21.113098 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa68d84c_a712_4979_afe1_bdb4f8329372.slice/crio-02c6820af882fae7cf5037ec12464d64ab4357e1098b34b72ff9f08b9713fb10 WatchSource:0}: Error finding container 02c6820af882fae7cf5037ec12464d64ab4357e1098b34b72ff9f08b9713fb10: Status 404 returned error can't find the container with id 02c6820af882fae7cf5037ec12464d64ab4357e1098b34b72ff9f08b9713fb10 Dec 12 15:23:21 crc kubenswrapper[5099]: W1212 15:23:21.120284 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5880f9c7_9f93_49ac_9a5f_1fbe457edb1b.slice/crio-11362d348fb7a9e04a39f4a2eb0578e62e33ece5e7dde5305b2067dbc6fb5d57 WatchSource:0}: Error finding container 11362d348fb7a9e04a39f4a2eb0578e62e33ece5e7dde5305b2067dbc6fb5d57: Status 404 returned error can't find the container with id 11362d348fb7a9e04a39f4a2eb0578e62e33ece5e7dde5305b2067dbc6fb5d57 Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.131053 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.151888 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.171082 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.193843 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.194399 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.69437316 +0000 UTC m=+139.798281801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.194637 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.195083 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.695075559 +0000 UTC m=+139.798984200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.195500 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.210416 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.230547 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.293263 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.293438 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.293261 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.296052 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.296837 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.796817702 +0000 UTC m=+139.900726333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.311371 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.331814 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.380300 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.381154 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.391088 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397588 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397646 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397693 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397718 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397762 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397793 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.397957 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.398152 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:21.898134264 +0000 UTC m=+140.002042905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.399151 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-cabundle\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.406640 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.407088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/32e575ce-a859-4f31-a407-adf15ebb80bd-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.406782 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.409242 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/de54b467-5c8f-470a-9fe9-c54105ff38e2-signing-key\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.410887 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dafca5e1-81a4-4904-98bd-a054d15d3afd-webhook-certs\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.411476 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.483768 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.675598 5099 request.go:752] "Waited before sending request" delay="1.809725975s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.688365 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.688410 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.688760 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.706167 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.707953 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708011 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" 
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708065 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708141 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708264 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708346 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708386 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708442 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.708561 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.715740 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlws2\" (UniqueName: \"kubernetes.io/projected/1814fed8-acf1-4395-86f9-24219f084d55-kube-api-access-nlws2\") pod \"packageserver-7d4fc7d867-zkw7z\" (UID: \"1814fed8-acf1-4395-86f9-24219f084d55\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.716806 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd3fd373-c790-4c95-94ee-3fc86809aaf2-config\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" 
Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.717008 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.216977663 +0000 UTC m=+140.320886304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.901498 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd3fd373-c790-4c95-94ee-3fc86809aaf2-serving-cert\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.903357 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.910710 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c20a46e-1da7-4233-b89a-5029f754b132-config-volume\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.920777 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:21 crc kubenswrapper[5099]: E1212 15:23:21.921386 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.421373509 +0000 UTC m=+140.525282150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.923768 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.923843 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5rr\" (UniqueName: \"kubernetes.io/projected/21cff6b7-1a65-4937-9d61-00c599278e4c-kube-api-access-7q5rr\") pod \"migrator-866fcbc849-gtvwd\" (UID: \"21cff6b7-1a65-4937-9d61-00c599278e4c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.925203 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-certs\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.926214 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd"
Dec 12 15:23:21 crc kubenswrapper[5099]: I1212 15:23:21.932915 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c20a46e-1da7-4233-b89a-5029f754b132-metrics-tls\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.102950 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.106182 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.606156783 +0000 UTC m=+140.710065424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.107164 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdnp\" (UniqueName: \"kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp\") pod \"collect-profiles-29425875-6fljb\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.116062 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xmg\" (UniqueName: \"kubernetes.io/projected/bc075443-c3a4-468b-a2db-32223eb9093b-kube-api-access-v9xmg\") pod \"machine-config-controller-f9cdd68f7-dwwwc\" (UID: \"bc075443-c3a4-468b-a2db-32223eb9093b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.117014 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.118786 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rs6\" (UniqueName: \"kubernetes.io/projected/174ac316-3890-4143-b377-559d8d137c5c-kube-api-access-v2rs6\") pod \"router-default-68cf44c8b8-bgl7b\" (UID: \"174ac316-3890-4143-b377-559d8d137c5c\") " pod="openshift-ingress/router-default-68cf44c8b8-bgl7b"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.121907 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxm7q\" (UniqueName: \"kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q\") pod \"cni-sysctl-allowlist-ds-5wjhf\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.123484 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.127397 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtv5r\" (UniqueName: \"kubernetes.io/projected/18b0c64a-ef3e-458e-a852-c3bac6e0f3a6-kube-api-access-vtv5r\") pod \"ingress-operator-6b9cb4dbcf-bjx2r\" (UID: \"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.127975 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82cp\" (UniqueName: \"kubernetes.io/projected/dafca5e1-81a4-4904-98bd-a054d15d3afd-kube-api-access-r82cp\") pod \"multus-admission-controller-69db94689b-k2mn9\" (UID: \"dafca5e1-81a4-4904-98bd-a054d15d3afd\") " pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.128805 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-srv-cert\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.129306 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvl52\" (UniqueName: \"kubernetes.io/projected/de54b467-5c8f-470a-9fe9-c54105ff38e2-kube-api-access-jvl52\") pod \"service-ca-74545575db-lgbxs\" (UID: \"de54b467-5c8f-470a-9fe9-c54105ff38e2\") " pod="openshift-service-ca/service-ca-74545575db-lgbxs"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.131979 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khzhk\" (UniqueName: \"kubernetes.io/projected/b22ffdfa-22ac-494c-b7ed-e12a8a159b9a-kube-api-access-khzhk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nvw9n\" (UID: \"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.132178 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x28sf\" (UniqueName: \"kubernetes.io/projected/38193cbf-c891-4f9a-910f-2d7333064556-kube-api-access-x28sf\") pod \"machine-config-operator-67c9d58cbb-dcr5l\" (UID: \"38193cbf-c891-4f9a-910f-2d7333064556\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.132209 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jrq\" (UniqueName: \"kubernetes.io/projected/3adb4515-a2d2-4849-a626-81443f61d9d2-kube-api-access-n8jrq\") pod \"csi-hostpathplugin-7h686\" (UID: \"3adb4515-a2d2-4849-a626-81443f61d9d2\") " pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.132836 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2nkh\" (UniqueName: \"kubernetes.io/projected/6555fce5-2fef-4e2f-b724-ef8eb4103b7a-kube-api-access-c2nkh\") pod \"catalog-operator-75ff9f647d-dmmt7\" (UID: \"6555fce5-2fef-4e2f-b724-ef8eb4103b7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.133601 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9zl\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.134484 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afe5f347-a3d1-4f77-afb7-f490ae797422-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6756k\" (UID: \"afe5f347-a3d1-4f77-afb7-f490ae797422\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.134524 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5sp\" (UniqueName: \"kubernetes.io/projected/effa37fd-72fe-49ed-99b8-e190b0115c26-kube-api-access-vf5sp\") pod \"openshift-controller-manager-operator-686468bdd5-c5k7s\" (UID: \"effa37fd-72fe-49ed-99b8-e190b0115c26\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.135022 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5xtg\" (UniqueName: \"kubernetes.io/projected/1e63fcef-7121-4025-b914-bb1ee37e8d5a-kube-api-access-q5xtg\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.135220 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413d84ff-aa8f-43d9-85c8-873b9c7855da-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xmdmg\" (UID: \"413d84ff-aa8f-43d9-85c8-873b9c7855da\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.136027 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdr2z\" (UniqueName: \"kubernetes.io/projected/32e575ce-a859-4f31-a407-adf15ebb80bd-kube-api-access-vdr2z\") pod \"package-server-manager-77f986bd66-kf45z\" (UID: \"32e575ce-a859-4f31-a407-adf15ebb80bd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.136559 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1e63fcef-7121-4025-b914-bb1ee37e8d5a-node-bootstrap-token\") pod \"machine-config-server-jhxlt\" (UID: \"1e63fcef-7121-4025-b914-bb1ee37e8d5a\") " pod="openshift-machine-config-operator/machine-config-server-jhxlt"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.136779 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbgn\" (UniqueName: \"kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn\") pod \"marketplace-operator-547dbd544d-fmnlp\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.145702 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwnhv\" (UniqueName: \"kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv\") pod \"oauth-openshift-66458b6674-gmbdh\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.146030 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.147384 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d76cc\" (UniqueName: \"kubernetes.io/projected/6c20a46e-1da7-4233-b89a-5029f754b132-kube-api-access-d76cc\") pod \"dns-default-tkw9f\" (UID: \"6c20a46e-1da7-4233-b89a-5029f754b132\") " pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.147880 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b24n\" (UniqueName: \"kubernetes.io/projected/cd3fd373-c790-4c95-94ee-3fc86809aaf2-kube-api-access-5b24n\") pod \"service-ca-operator-5b9c976747-ptfxw\" (UID: \"cd3fd373-c790-4c95-94ee-3fc86809aaf2\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.147873 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" event={"ID":"7726b6bd-36f7-4478-91ad-1fa75a6da808","Type":"ContainerStarted","Data":"5855d004fecf0d233e786c8702e72c00b5e5f4ccd1c235326a4a38e5f43b7863"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.148145 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/613e1678-fee7-4592-a8e1-36a5454d482c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pwlkx\" (UID: \"613e1678-fee7-4592-a8e1-36a5454d482c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.148819 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh5jw\" (UniqueName: \"kubernetes.io/projected/c42235e4-1a23-419b-bf5e-1e7f25ee251d-kube-api-access-hh5jw\") pod \"kube-storage-version-migrator-operator-565b79b866-z2k9v\" (UID: \"c42235e4-1a23-419b-bf5e-1e7f25ee251d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.149984 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsk9h\" (UniqueName: \"kubernetes.io/projected/092012fd-8c87-44d7-92dc-83036f270c8c-kube-api-access-bsk9h\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.155716 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" event={"ID":"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8","Type":"ContainerStarted","Data":"88ef4e2daeb5a93832e2bfeaab86a4b254393ec17259fee0cfd9be4e182d4c14"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.156378 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-crs4f\" (UniqueName: \"kubernetes.io/projected/be1d4c1f-99d8-40f2-b6d5-d6b7aec07317-kube-api-access-crs4f\") pod \"olm-operator-5cdf44d969-wn5sv\" (UID: \"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.190863 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/092012fd-8c87-44d7-92dc-83036f270c8c-cert\") pod \"ingress-canary-6k74d\" (UID: \"092012fd-8c87-44d7-92dc-83036f270c8c\") " pod="openshift-ingress-canary/ingress-canary-6k74d"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.206580 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.238819 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.242527 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.742507142 +0000 UTC m=+140.846415783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.248781 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.254896 5099 generic.go:358] "Generic (PLEG): container finished" podID="91b03c8a-85d9-4774-bdfe-87d41eace7ca" containerID="2aaa6123f9ca8d8a50edd403d631dc70ee47db152b808a2f88918da802a0df53" exitCode=0
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.255103 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" event={"ID":"91b03c8a-85d9-4774-bdfe-87d41eace7ca","Type":"ContainerDied","Data":"2aaa6123f9ca8d8a50edd403d631dc70ee47db152b808a2f88918da802a0df53"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.260609 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerStarted","Data":"52d7db45034b9f43a709e82117c8a14e3ec8da85147533bbbb10eb06b8f9d870"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.267083 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.282881 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" event={"ID":"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b","Type":"ContainerStarted","Data":"11362d348fb7a9e04a39f4a2eb0578e62e33ece5e7dde5305b2067dbc6fb5d57"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.283171 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.287221 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"562ad89729f039c3acfa2ac2821bb7f41fd4353f7bb9ce47bfe3bf9db5a26ddc"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.289755 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" event={"ID":"c8619046-af5b-41ea-b50f-ed757800ab47","Type":"ContainerStarted","Data":"b14bc9647862667cd8650f3c68d02cdc8ddcca34bfa21dac4a9c267f0d8f0f8f"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.299316 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" event={"ID":"c8810bcc-64c4-4f66-aa84-7f195007922e","Type":"ContainerStarted","Data":"941fd543d7dbdb04e6b9c3a8f1854e2eb0711c54a1ad4c3673fc615f21f3b413"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.308245 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.311250 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.312179 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.812147449 +0000 UTC m=+140.916056090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.319511 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" event={"ID":"2d338ea8-c48b-425e-8839-7d9d05af01d7","Type":"ContainerStarted","Data":"f966898f02786df1cabbe0ab65760a509dbc520f9c6c3be8faa975a507c09758"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.332016 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.342368 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.362771 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" event={"ID":"1c0b0461-cdc7-4453-9766-eb3dc5385423","Type":"ContainerStarted","Data":"91c9268029a6bdf10aa982e39b6977f79804db02bd518e8f3a97420bdea09a9d"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.363090 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.366382 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" event={"ID":"c43456a5-e138-46ad-bbc2-1b7c25526806","Type":"ContainerStarted","Data":"e5d6e6a70ad6d6d83218a564d02c32153c366963353483959c0a26426869d3bf"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.370793 5099 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-22f86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.370839 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.372879 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" event={"ID":"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae","Type":"ContainerStarted","Data":"d72c88520bcf5ca60912d771527488a0fbbc6bc282947bc0fec93bdd220e95c2"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.373511 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.381736 5099 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-48xth container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.381813 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" event={"ID":"3c6ceb95-e500-4c75-b79f-135276dd6854","Type":"ContainerStarted","Data":"8f05d74569411b990b7f324b8b2fce8d6071a0478a6906567eb6c3cffde3cbe0"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.381823 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.384998 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tpqns" event={"ID":"be3e8066-7769-4174-b1af-e18146cd80c0","Type":"ContainerStarted","Data":"4046817da712e0c6dc47d0bf299912505e85e8d75e45ac8684c33366a8a1d4e8"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.385193 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.390375 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" event={"ID":"900d9c74-c186-4e65-8d3d-5f282ced8617","Type":"ContainerStarted","Data":"62de239f774f16434eccb235840031f76e91b9112d2c2bbaa668b1a0b84cc338"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.395122 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xpdmq" event={"ID":"aa68d84c-a712-4979-afe1-bdb4f8329372","Type":"ContainerStarted","Data":"02c6820af882fae7cf5037ec12464d64ab4357e1098b34b72ff9f08b9713fb10"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.401213 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.410582 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.410874 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:22.910860173 +0000 UTC m=+141.014768814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.432627 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" event={"ID":"44e6a852-c3c4-48d1-b80b-fb3c0897af7c","Type":"ContainerStarted","Data":"10ee2fa86f36ff3f6ef5a46713dbd2bb0992cda96bd035a40d4377c256e4c31d"}
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.433718 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.439724 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.445296 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.546778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.546933 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.046911711 +0000 UTC m=+141.150820352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.547243 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.548885 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.048871092 +0000 UTC m=+141.152779733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.567085 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.572141 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lgbxs"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.729937 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.753504 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.753690 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.253652672 +0000 UTC m=+141.357561313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.753839 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.754719 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.254711179 +0000 UTC m=+141.358619820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.857527 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.859143 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.359112202 +0000 UTC m=+141.463020843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.956710 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd"]
Dec 12 15:23:22 crc kubenswrapper[5099]: I1212 15:23:22.964496 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:22 crc kubenswrapper[5099]: E1212 15:23:22.964887 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.4648746 +0000 UTC m=+141.568783241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.065232 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.065633 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.565611827 +0000 UTC m=+141.669520468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.166529 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.167129 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.667094853 +0000 UTC m=+141.771003494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.478361 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.478630 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:23.978603367 +0000 UTC m=+142.082512008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.582821 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.583194 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.083182294 +0000 UTC m=+142.187090935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.595377 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" event={"ID":"21cff6b7-1a65-4937-9d61-00c599278e4c","Type":"ContainerStarted","Data":"a3120ab95c0a12208c1fa280181af924097385d172479b5071db4ddca30a2c7f"}
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.614818 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerStarted","Data":"5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0"}
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.626980 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-87m2r"
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.801553 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.801845 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.301827104 +0000 UTC m=+142.405735745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.801927 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:23 crc kubenswrapper[5099]: E1212 15:23:23.802159 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.302151993 +0000 UTC m=+142.406060634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.803347 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" event={"ID":"174ac316-3890-4143-b377-559d8d137c5c","Type":"ContainerStarted","Data":"ce72c01b60abc053d280233fbccc275d41b2bc35b9901ae4a21d3cf753705e36"}
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.891651 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 12 15:23:23 crc kubenswrapper[5099]: I1212 15:23:23.891756 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:23.907067 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:23.907567 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.407531661 +0000 UTC m=+142.511440302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.039389 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.050742 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.550723675 +0000 UTC m=+142.654632316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.161598 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.161979 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.661960395 +0000 UTC m=+142.765869036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.229066 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" event={"ID":"3dde3606-4e50-4c3e-8b0d-baa57e43c41c","Type":"ContainerStarted","Data":"951466585d162ae4be5fbb38738e3c1115a1d50d022ff25a302954ff91318318"}
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.229942 5099 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-48xth container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.230004 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.231257 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"]
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.276729 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.277333 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.777315254 +0000 UTC m=+142.881223895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.277771 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k"]
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.379162 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.380901 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.880872224 +0000 UTC m=+142.984780895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.381381 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.381803 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.881791738 +0000 UTC m=+142.985700439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.483700 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.484006 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:24.983986993 +0000 UTC m=+143.087895634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: W1212 15:23:24.575240 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe5f347_a3d1_4f77_afb7_f490ae797422.slice/crio-2bc8b849b122a7730fa01a1b41be00484fc37b45ca3eade13c65c0075266d683 WatchSource:0}: Error finding container 2bc8b849b122a7730fa01a1b41be00484fc37b45ca3eade13c65c0075266d683: Status 404 returned error can't find the container with id 2bc8b849b122a7730fa01a1b41be00484fc37b45ca3eade13c65c0075266d683
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.589090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.589525 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.089509595 +0000 UTC m=+143.193418246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.712835 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.713131 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.213112648 +0000 UTC m=+143.317021289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.746748 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z"
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.747623 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"
Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.815918 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.817075 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.317061469 +0000 UTC m=+143.420970110 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.877134 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-87m2r" podStartSLOduration=118.877116925 podStartE2EDuration="1m58.877116925s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:24.852303158 +0000 UTC m=+142.956211799" watchObservedRunningTime="2025-12-12 15:23:24.877116925 +0000 UTC m=+142.981025566" Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.877879 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" podStartSLOduration=118.877874725 podStartE2EDuration="1m58.877874725s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:24.875563554 +0000 UTC m=+142.979472205" watchObservedRunningTime="2025-12-12 15:23:24.877874725 +0000 UTC m=+142.981783366" Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.917274 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:24 crc kubenswrapper[5099]: E1212 15:23:24.917713 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.417695043 +0000 UTC m=+143.521603684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.928362 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:24 crc kubenswrapper[5099]: I1212 15:23:24.950487 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-9jxc6" podStartSLOduration=118.950467458 podStartE2EDuration="1m58.950467458s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:24.945268872 +0000 UTC m=+143.049177523" watchObservedRunningTime="2025-12-12 15:23:24.950467458 +0000 UTC m=+143.054376099" Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.019241 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.019642 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.519627341 +0000 UTC m=+143.623535982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.032898 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podStartSLOduration=119.032877267 podStartE2EDuration="1m59.032877267s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:24.962975584 +0000 UTC m=+143.066884235" watchObservedRunningTime="2025-12-12 15:23:25.032877267 +0000 UTC m=+143.136785908" Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.131839 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.132461 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.632440423 +0000 UTC m=+143.736349064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.233073 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.233421 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.733407916 +0000 UTC m=+143.837316557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.244779 5099 generic.go:358] "Generic (PLEG): container finished" podID="44e6a852-c3c4-48d1-b80b-fb3c0897af7c" containerID="49d5d95815f49c0d7989d272e09d2e2cec20e32d88cae0a04bafc9e4caeb401b" exitCode=0 Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.244886 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" event={"ID":"44e6a852-c3c4-48d1-b80b-fb3c0897af7c","Type":"ContainerDied","Data":"49d5d95815f49c0d7989d272e09d2e2cec20e32d88cae0a04bafc9e4caeb401b"} Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.248399 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" event={"ID":"f241e2a0-3e8f-45a4-805e-729b31ed6add","Type":"ContainerStarted","Data":"c46b6726ec771e740a33b5d22b53091cb53e83d21ce85a4f35d48cd8bb5b1ecb"} Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.256358 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" event={"ID":"afe5f347-a3d1-4f77-afb7-f490ae797422","Type":"ContainerStarted","Data":"2bc8b849b122a7730fa01a1b41be00484fc37b45ca3eade13c65c0075266d683"} Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.333964 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.334108 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.834076811 +0000 UTC m=+143.937985452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.334309 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.334771 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.834758659 +0000 UTC m=+143.938667320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.354396 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.354465 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.435176 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.435939 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:25.935915737 +0000 UTC m=+144.039824378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.470463 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.540215 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.540684 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg"] Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.544469 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.044446947 +0000 UTC m=+144.148355588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.606201 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l"] Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.641109 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.642061 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.142038032 +0000 UTC m=+144.245946673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.642094 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-k2mn9"] Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.684623 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n"] Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.687926 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7"] Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.690951 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z"] Dec 12 15:23:25 crc kubenswrapper[5099]: W1212 15:23:25.725437 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413d84ff_aa8f_43d9_85c8_873b9c7855da.slice/crio-af2aecbc056c27f9a02dfb87878613044be6af1507161fbb56834facacc8b2ed WatchSource:0}: Error finding container af2aecbc056c27f9a02dfb87878613044be6af1507161fbb56834facacc8b2ed: Status 404 returned error can't find the container with id af2aecbc056c27f9a02dfb87878613044be6af1507161fbb56834facacc8b2ed Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.734393 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v"] Dec 12 15:23:25 crc kubenswrapper[5099]: W1212 15:23:25.736619 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38193cbf_c891_4f9a_910f_2d7333064556.slice/crio-cba67ce91e7e0c6bfeeefbd9ee9a8179b6cd68cdf9161c17d50405053fee9d40 WatchSource:0}: Error finding container cba67ce91e7e0c6bfeeefbd9ee9a8179b6cd68cdf9161c17d50405053fee9d40: Status 404 returned error can't find the container with id cba67ce91e7e0c6bfeeefbd9ee9a8179b6cd68cdf9161c17d50405053fee9d40 Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.745735 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.746105 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.246093415 +0000 UTC m=+144.350002056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.797057 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx"]
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.845564 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"]
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.847496 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.849422 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.349399419 +0000 UTC m=+144.453308060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.856958 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r"]
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.886163 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s"]
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.886581 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc"]
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.920828 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6k74d"
Dec 12 15:23:25 crc kubenswrapper[5099]: I1212 15:23:25.960171 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:25 crc kubenswrapper[5099]: E1212 15:23:25.960660 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.460620509 +0000 UTC m=+144.564529150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.002481 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jhxlt"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.025061 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"]
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.041652 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h686"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.057907 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-lgbxs"]
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.059944 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.062328 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.064731 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.564707734 +0000 UTC m=+144.668616375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
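Every MountDevice and TearDownAt failure in the loop above dies at the same step: the kubelet cannot construct a CSI client because kubevirt.io.hostpath-provisioner is missing from its table of registered drivers, and the entries just above show why: the driver pod itself (hostpath-provisioner/csi-hostpathplugin-7h686) is only now getting a sandbox. A minimal sketch of that lookup pattern follows; the types and names are invented for illustration and this is not the kubelet's actual source.

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry models the kubelet-side table that CSI node plugins
// populate when they register; all names here are hypothetical.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> node plugin endpoint
}

func newDriverRegistry() *driverRegistry {
	return &driverRegistry{drivers: make(map[string]string)}
}

// register is the step that has not happened yet in the excerpt above:
// it only runs once the driver pod is up and announces itself.
func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// client fails fast when the driver is absent, mirroring the
// newCsiDriverClient / TearDownAt errors in the log; the caller then
// requeues the whole operation instead of blocking on the driver.
func (r *driverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := newDriverRegistry()
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("before registration:", err) // same shape as the log error
	}
	// Hypothetical socket path; the real path depends on the deployment.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	if ep, err := reg.client("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("after registration:", ep)
	}
}
```

Once the plugin pod is running and has registered (typically via a node-driver-registrar sidecar talking to the kubelet's plugin-registration socket), the next retry of these same Mount/Unmount operations finds the driver and proceeds.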
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.175970 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.182672 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.682626469 +0000 UTC m=+144.786535110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.231389 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.283369 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.284838 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.784810714 +0000 UTC m=+144.888719355 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.367071 5099 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-48xth container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": context deadline exceeded" start-of-body=
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.367161 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": context deadline exceeded"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.386194 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.387679 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.887651205 +0000 UTC m=+144.991559916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.391845 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" event={"ID":"bc075443-c3a4-468b-a2db-32223eb9093b","Type":"ContainerStarted","Data":"ca26796efb9b123ecefc6c3bf05f4bd982d8aed5b107da2a33cafabdadef6673"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.492348 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
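Note the probe failure shape just above: earlier the same controller-manager endpoint failed with connection refused (nothing listening on 10.217.0.5:8443 yet), while this attempt fails with context deadline exceeded (the socket is open but the handler did not reply within the probe timeout). A small self-contained sketch of the distinction, using a hypothetical local URL in place of the pod addresses from the log:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// probe issues a single GET with a deadline, roughly what a readiness
// probe does. With nothing listening, the returned error wraps
// "connection refused"; with a listener that never answers in time,
// it wraps "context deadline exceeded".
func probe(url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // the transport error carries the two shapes seen in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical endpoint standing in for https://10.217.0.5:8443/healthz.
	if err := probe("http://127.0.0.1:18443/healthz", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}
```

Both shapes are routine while containers are still coming up; they are only interesting if they persist after the container has been running for a while.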
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.493285 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:26.993257979 +0000 UTC m=+145.097166620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.496378 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" event={"ID":"91b03c8a-85d9-4774-bdfe-87d41eace7ca","Type":"ContainerStarted","Data":"c0bee22d7dcd5566025c937f8a7ecedf0d573bb62f1c0d321d88091e5342cb90"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.501529 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.505164 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z"]
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.505971 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" event={"ID":"6555fce5-2fef-4e2f-b724-ef8eb4103b7a","Type":"ContainerStarted","Data":"e83c5acada3d4998067298b611dec0d62bd9232fb8a167b3c3ae831e209b76f1"}
Dec 12 15:23:26 crc kubenswrapper[5099]: W1212 15:23:26.551635 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e63fcef_7121_4025_b914_bb1ee37e8d5a.slice/crio-4d27575b7c90e804f442c7c996b29a4d6d4cd3b8e2cac443f2a5ec9a0cb5e533 WatchSource:0}: Error finding container 4d27575b7c90e804f442c7c996b29a4d6d4cd3b8e2cac443f2a5ec9a0cb5e533: Status 404 returned error can't find the container with id 4d27575b7c90e804f442c7c996b29a4d6d4cd3b8e2cac443f2a5ec9a0cb5e533
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.560506 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" podStartSLOduration=120.560474672 podStartE2EDuration="2m0.560474672s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:26.560295307 +0000 UTC m=+144.664203948" watchObservedRunningTime="2025-12-12 15:23:26.560474672 +0000 UTC m=+144.664383313"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.566389 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" event={"ID":"174ac316-3890-4143-b377-559d8d137c5c","Type":"ContainerStarted","Data":"ee1a4d27c95179107f0ae2b1ef5643f85381f7f27d8f8dbe65bcdb504b4db7c4"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.584423 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" event={"ID":"613e1678-fee7-4592-a8e1-36a5454d482c","Type":"ContainerStarted","Data":"8484bf16a80c32a5e2b76df74dc1edb6820ef5b3f35208487773d50d9130a3db"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.588192 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv"]
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.592965 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" event={"ID":"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b","Type":"ContainerStarted","Data":"69d6f5dd3274435a1a46ead10f6e4ba74c4d3cfd715f641858a97ca48d4faaa5"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.597352 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.597716 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.097702213 +0000 UTC m=+145.201610854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.649732 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podStartSLOduration=120.649709569 podStartE2EDuration="2m0.649709569s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:26.647498431 +0000 UTC m=+144.751407082" watchObservedRunningTime="2025-12-12 15:23:26.649709569 +0000 UTC m=+144.753618210"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.698860 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.700472 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.200445792 +0000 UTC m=+145.304354433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
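Each failed volume operation above is parked rather than retried inline: nestedpendingoperations stamps the operation with a "No retries permitted until ..." time and the reconciler skips it until that window passes. This excerpt only ever shows the initial 500ms window; the growth factor and cap in the sketch below are assumptions, and the names are invented rather than taken from the kubelet source.

```go
package main

import (
	"fmt"
	"time"
)

// pendingOp models one entry in a per-volume pending-operations table.
type pendingOp struct {
	failures  int
	notBefore time.Time     // "No retries permitted until ..."
	lastDelay time.Duration // "(durationBeforeRetry ...)"
}

// fail records a failure and computes the next retry window. The log shows
// the initial 500ms window; the doubling and the cap are assumptions.
func (op *pendingOp) fail(now time.Time) {
	switch {
	case op.lastDelay == 0:
		op.lastDelay = 500 * time.Millisecond // initial window, as in the log
	case op.lastDelay < 2*time.Minute:
		op.lastDelay *= 2 // assumed growth factor
	}
	if op.lastDelay > 2*time.Minute {
		op.lastDelay = 2 * time.Minute // assumed cap
	}
	op.failures++
	op.notBefore = now.Add(op.lastDelay)
}

// mayRetry is what the reconciler effectively asks before re-queuing.
func (op *pendingOp) mayRetry(now time.Time) bool {
	return !now.Before(op.notBefore)
}

func main() {
	var op pendingOp
	now := time.Now()
	for i := 0; i < 3; i++ {
		op.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			op.failures, op.notBefore.Format(time.RFC3339Nano), op.lastDelay)
		now = op.notBefore // assume the retry fires as soon as it is allowed
	}
	fmt.Println("may retry now:", op.mayRetry(now))
}
```

Whether and how the window actually grows on repeated failures is not visible in this excerpt, which is why the sketch marks those parts as assumptions.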
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.729078 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" event={"ID":"dafca5e1-81a4-4904-98bd-a054d15d3afd","Type":"ContainerStarted","Data":"33c297c5c7988762a9c438c8d6b621e1150b316193e5b690a889fb8f03a01754"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.734427 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" event={"ID":"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a","Type":"ContainerStarted","Data":"6a96a92db98a655eaa6edad9eae98374f04946415120c0f379f48503aa83d924"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.735598 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" event={"ID":"effa37fd-72fe-49ed-99b8-e190b0115c26","Type":"ContainerStarted","Data":"06a6d927d83dab8f7454e650773c1fb0038fe2812c1fbc176b6b53529aab7bb6"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.736447 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" event={"ID":"413d84ff-aa8f-43d9-85c8-873b9c7855da","Type":"ContainerStarted","Data":"af2aecbc056c27f9a02dfb87878613044be6af1507161fbb56834facacc8b2ed"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.768205 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" event={"ID":"c8619046-af5b-41ea-b50f-ed757800ab47","Type":"ContainerStarted","Data":"5d06d5e4c8babd93fd355d9b9fcacf71526f3cc4ae57676c508487c97be3e79b"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.768788 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-9bfsz"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.770586 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-9bfsz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.770637 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" podUID="c8619046-af5b-41ea-b50f-ed757800ab47" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.800292 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" event={"ID":"c8810bcc-64c4-4f66-aa84-7f195007922e","Type":"ContainerStarted","Data":"256782a54e016931ccf64dd798966badf203a1a022f17e985d25de0d688ae1e2"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.801089 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.801669 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.301648551 +0000 UTC m=+145.405557192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.805563 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" event={"ID":"38193cbf-c891-4f9a-910f-2d7333064556","Type":"ContainerStarted","Data":"cba67ce91e7e0c6bfeeefbd9ee9a8179b6cd68cdf9161c17d50405053fee9d40"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.807185 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" event={"ID":"3dde3606-4e50-4c3e-8b0d-baa57e43c41c","Type":"ContainerStarted","Data":"c3dd37ff80bb4da0b2365f4080fef4b8a80f93280a5e428b586b6265223170d0"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.816569 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" event={"ID":"c43456a5-e138-46ad-bbc2-1b7c25526806","Type":"ContainerStarted","Data":"25b37e3f1465e9abe20e0b4d027225546b6ace4de9aeb3623fd1f094b8b254f6"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.817582 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" event={"ID":"21cff6b7-1a65-4937-9d61-00c599278e4c","Type":"ContainerStarted","Data":"af2afff4a9eb2bc3b794ca91e63f9d774fad353b9395895e3932694d2b747464"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.907162 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.907373 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.407324687 +0000 UTC m=+145.511233328 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.908721 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:26 crc kubenswrapper[5099]: E1212 15:23:26.909367 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.4093503 +0000 UTC m=+145.513258941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.915390 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" podStartSLOduration=120.915359756 podStartE2EDuration="2m0.915359756s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:26.906310461 +0000 UTC m=+145.010219102" watchObservedRunningTime="2025-12-12 15:23:26.915359756 +0000 UTC m=+145.019268397"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.919097 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw"]
Dec 12 15:23:26 crc kubenswrapper[5099]: W1212 15:23:26.941342 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3fd373_c790_4c95_94ee_3fc86809aaf2.slice/crio-c69262fcbeddab10c3b7104649123148cf0e7d2fc732418088fb642161e54e6a WatchSource:0}: Error finding container c69262fcbeddab10c3b7104649123148cf0e7d2fc732418088fb642161e54e6a: Status 404 returned error can't find the container with id c69262fcbeddab10c3b7104649123148cf0e7d2fc732418088fb642161e54e6a
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.948102 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vtqng" podStartSLOduration=120.94808201 podStartE2EDuration="2m0.94808201s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:26.941922539 +0000 UTC m=+145.045831200" watchObservedRunningTime="2025-12-12 15:23:26.94808201 +0000 UTC m=+145.051990651"
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.973418 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" event={"ID":"3c6ceb95-e500-4c75-b79f-135276dd6854","Type":"ContainerStarted","Data":"9f1f45ba229a851303108df2b02fb330ca72e76108abf121160593b2c00c7689"}
Dec 12 15:23:26 crc kubenswrapper[5099]: I1212 15:23:26.987901 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tpqns" event={"ID":"be3e8066-7769-4174-b1af-e18146cd80c0","Type":"ContainerStarted","Data":"b0dd165a61a65645a688dcb306d7af3f13471757871bdaab60fcbf1577df8480"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.010752 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.011093 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.511071832 +0000 UTC m=+145.614980463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.015300 5099 generic.go:358] "Generic (PLEG): container finished" podID="900d9c74-c186-4e65-8d3d-5f282ced8617" containerID="adf79f0f4eaad1ef130e0c22d3ad486e92e69006452555100a2b944b1ef9da73" exitCode=0
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.015448 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" event={"ID":"900d9c74-c186-4e65-8d3d-5f282ced8617","Type":"ContainerDied","Data":"adf79f0f4eaad1ef130e0c22d3ad486e92e69006452555100a2b944b1ef9da73"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.018314 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-6ldwt" podStartSLOduration=121.01828322 podStartE2EDuration="2m1.01828322s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:27.017756056 +0000 UTC m=+145.121664707" watchObservedRunningTime="2025-12-12 15:23:27.01828322 +0000 UTC m=+145.122191861"
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.018582 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-kntm5" podStartSLOduration=121.018577188 podStartE2EDuration="2m1.018577188s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:26.968455121 +0000 UTC m=+145.072363762" watchObservedRunningTime="2025-12-12 15:23:27.018577188 +0000 UTC m=+145.122485829"
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.040346 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xpdmq" event={"ID":"aa68d84c-a712-4979-afe1-bdb4f8329372","Type":"ContainerStarted","Data":"3f6c7c3d1b0f99e11763b08d9b9e8b6f74b6139488d6c875c479217fcac3d178"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.071119 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lgbxs" event={"ID":"de54b467-5c8f-470a-9fe9-c54105ff38e2","Type":"ContainerStarted","Data":"bb60b75c7445511e8e65477038f25f7d2c0d368cd966cfcf985e91029dfe4224"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.113242 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.132876 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.632854428 +0000 UTC m=+145.736763069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.241279 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tpqns" podStartSLOduration=121.241253005 podStartE2EDuration="2m1.241253005s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:27.07158323 +0000 UTC m=+145.175491871" watchObservedRunningTime="2025-12-12 15:23:27.241253005 +0000 UTC m=+145.345161656"
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.242694 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerStarted","Data":"6c9bc29bcf39d6878396dcd6ca8ac3d1138c9c2c82684433626f432cbc726b52"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.242765 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b"
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.243569 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.244073 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.744055837 +0000 UTC m=+145.847964478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.331880 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-xpdmq" podStartSLOduration=121.331862118 podStartE2EDuration="2m1.331862118s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:27.331843467 +0000 UTC m=+145.435752118" watchObservedRunningTime="2025-12-12 15:23:27.331862118 +0000 UTC m=+145.435770759"
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.335496 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" event={"ID":"c42235e4-1a23-419b-bf5e-1e7f25ee251d","Type":"ContainerStarted","Data":"f0a7cbbb026776516050f1d7b01646d8d0f0a586786953f7e62a54f4eb8eb5f2"}
Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.347537 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.347884 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:27.847871284 +0000 UTC m=+145.951779925 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.350299 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" event={"ID":"5002e2d3-b94c-4d2d-a391-9f84e63ffd20","Type":"ContainerStarted","Data":"25701d057f5a7388a24f6bc637febcaa24fd0fdf2f6b7851cfdfabe1a6edce24"} Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.353999 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:27 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:27 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:27 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.354043 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.397623 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" event={"ID":"7726b6bd-36f7-4478-91ad-1fa75a6da808","Type":"ContainerStarted","Data":"026279445c1979a03558edd2b92733a2c00dd1dab919ae13f8896ce2332264f4"} Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.404408 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" event={"ID":"0fd8dbd3-0ba2-43fb-8364-7d98167be1b8","Type":"ContainerStarted","Data":"e91915bc47815b8bd509d6ab66fd1f697a8e1dae8e18ad57bd27022a376d1340"} Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.406174 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" event={"ID":"1814fed8-acf1-4395-86f9-24219f084d55","Type":"ContainerStarted","Data":"e880e2462477507a0d502ca31d9e03513954832f2706b9d436fe8719d07997aa"} Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.407819 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" event={"ID":"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6","Type":"ContainerStarted","Data":"47f14df38d58da622a9ce05a12f102e313d677e4d2e7af231dcd2de351fc33e5"} Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.408730 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.408763 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.521418 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.522274 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.022242611 +0000 UTC m=+146.126151272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.527514 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.528716 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.02869288 +0000 UTC m=+146.132601521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.542998 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6k74d"] Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.633645 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.633743 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:28.133724208 +0000 UTC m=+146.237632849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.633849 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.634127 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.134119649 +0000 UTC m=+146.238028290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.664292 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cvp2b" podStartSLOduration=121.664264065 podStartE2EDuration="2m1.664264065s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:27.545912788 +0000 UTC m=+145.649821429" watchObservedRunningTime="2025-12-12 15:23:27.664264065 +0000 UTC m=+145.768172716" Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.665922 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-2rzj9" podStartSLOduration=121.665910428 podStartE2EDuration="2m1.665910428s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:27.661949504 +0000 UTC m=+145.765858155" watchObservedRunningTime="2025-12-12 15:23:27.665910428 +0000 UTC m=+145.769819069" Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.735810 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.736006 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.235980525 +0000 UTC m=+146.339889166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.736164 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.736559 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.23654878 +0000 UTC m=+146.340457421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.844917 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.845103 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.34506901 +0000 UTC m=+146.448977671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.845498 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.845878 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.34586126 +0000 UTC m=+146.449769901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: W1212 15:23:27.885242 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod092012fd_8c87_44d7_92dc_83036f270c8c.slice/crio-44020d705d29f5151b3541009a9dbee489be614ec3676a6dfa268c320f8c6059 WatchSource:0}: Error finding container 44020d705d29f5151b3541009a9dbee489be614ec3676a6dfa268c320f8c6059: Status 404 returned error can't find the container with id 44020d705d29f5151b3541009a9dbee489be614ec3676a6dfa268c320f8c6059 Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.952593 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:27 crc kubenswrapper[5099]: E1212 15:23:27.952889 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.45285173 +0000 UTC m=+146.556760371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:27 crc kubenswrapper[5099]: I1212 15:23:27.996918 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h686"] Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.315152 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:28 crc kubenswrapper[5099]: E1212 15:23:28.315558 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.815545226 +0000 UTC m=+146.919453867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.360150 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:28 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:28 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:28 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.360244 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.416906 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:28 crc kubenswrapper[5099]: E1212 15:23:28.417273 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:28.917254481 +0000 UTC m=+147.021163122 (durationBeforeRetry 500ms). 
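Annotation: the nestedpendingoperations errors show the volume manager serializing work per volume: a failed mount or unmount arms a not-before deadline, and any attempt inside that window is rejected with "No retries permitted until ...". In this log each rescheduling lands 500ms out. A minimal sketch of the pattern with hypothetical names; this is the idea, not the actual nestedpendingoperations.go.

```go
// Sketch: per-volume operation gating with a retry window armed on failure.
package main

import (
	"fmt"
	"sync"
	"time"
)

type pendingOps struct {
	mu        sync.Mutex
	notBefore map[string]time.Time // volume -> earliest permitted retry
}

func (p *pendingOps) run(volume string, backoff time.Duration, op func() error) error {
	p.mu.Lock()
	if t, ok := p.notBefore[volume]; ok && time.Now().Before(t) {
		p.mu.Unlock()
		return fmt.Errorf("operation for %q failed. No retries permitted until %s (durationBeforeRetry %s)",
			volume, t.Format("2006-01-02 15:04:05.000000000 -0700 MST"), backoff)
	}
	p.mu.Unlock()

	if err := op(); err != nil {
		p.mu.Lock()
		p.notBefore[volume] = time.Now().Add(backoff) // arm the retry window on failure
		p.mu.Unlock()
		return err
	}
	return nil
}

func main() {
	p := &pendingOps{notBefore: map[string]time.Time{}}
	vol := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2"
	failing := func() error { return fmt.Errorf("driver not registered") }

	fmt.Println(p.run(vol, 500*time.Millisecond, failing)) // fails, arms a 500ms window
	fmt.Println(p.run(vol, 500*time.Millisecond, failing)) // rejected inside the window
	time.Sleep(600 * time.Millisecond)
	fmt.Println(p.run(vol, 500*time.Millisecond, failing)) // window expired: attempted again
}
```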
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.419841 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" event={"ID":"32e575ce-a859-4f31-a407-adf15ebb80bd","Type":"ContainerStarted","Data":"01dab109f884584803dd0b7cefe0b8aa9d4d5fd98fa04ca38747ed497fd0aa05"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.425323 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" event={"ID":"5880f9c7-9f93-49ac-9a5f-1fbe457edb1b","Type":"ContainerStarted","Data":"2ff05c9347d49873efd606c2ed54a5c66930c13f328c90268cf17afd621904cf"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.427462 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" event={"ID":"38193cbf-c891-4f9a-910f-2d7333064556","Type":"ContainerStarted","Data":"9ef1aa937b08db073144323d9e44cc6b2f1137872aa0bf28cb1bb06edd104776"} Dec 12 15:23:28 crc kubenswrapper[5099]: W1212 15:23:28.428884 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3adb4515_a2d2_4849_a626_81443f61d9d2.slice/crio-14713590ebd8144621c661f5f834eab8802f219b85a3eed93d38a5c2d793f66c WatchSource:0}: Error finding container 14713590ebd8144621c661f5f834eab8802f219b85a3eed93d38a5c2d793f66c: Status 404 returned error can't find the container with id 14713590ebd8144621c661f5f834eab8802f219b85a3eed93d38a5c2d793f66c Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.435978 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jhxlt" event={"ID":"1e63fcef-7121-4025-b914-bb1ee37e8d5a","Type":"ContainerStarted","Data":"4d27575b7c90e804f442c7c996b29a4d6d4cd3b8e2cac443f2a5ec9a0cb5e533"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.441269 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" event={"ID":"c43456a5-e138-46ad-bbc2-1b7c25526806","Type":"ContainerStarted","Data":"861c9944f99e4b3bfa675c0bade412ab187735c58f5348de4c3bf4df5a0e1625"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.455844 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" event={"ID":"f241e2a0-3e8f-45a4-805e-729b31ed6add","Type":"ContainerStarted","Data":"fce5df51b542ba3814052cdee12cf03b759e6de16a177c40332bdb514d06393d"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.518254 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:28 crc kubenswrapper[5099]: 
E1212 15:23:28.518655 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.018632388 +0000 UTC m=+147.122541029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.520474 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" event={"ID":"afe5f347-a3d1-4f77-afb7-f490ae797422","Type":"ContainerStarted","Data":"82a0033f3fbedac0cc1e951eb18d30848f484a09f2e6a2ed52b968a8ff7a7b5b"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.520645 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" event={"ID":"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317","Type":"ContainerStarted","Data":"617d9b6b2dbbc36d3965b1cb60f18e3c18f3991a2e537a419c0442a7a6635550"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.529020 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-6g4jq" podStartSLOduration=122.528996088 podStartE2EDuration="2m2.528996088s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:28.490823132 +0000 UTC m=+146.594731803" watchObservedRunningTime="2025-12-12 15:23:28.528996088 +0000 UTC m=+146.632904739" Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.530173 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" podStartSLOduration=122.530164179 podStartE2EDuration="2m2.530164179s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:28.528971467 +0000 UTC m=+146.632880128" watchObservedRunningTime="2025-12-12 15:23:28.530164179 +0000 UTC m=+146.634072820" Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.578222 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" event={"ID":"cd3fd373-c790-4c95-94ee-3fc86809aaf2","Type":"ContainerStarted","Data":"c69262fcbeddab10c3b7104649123148cf0e7d2fc732418088fb642161e54e6a"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.815336 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:28 crc kubenswrapper[5099]: E1212 15:23:28.815878 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.315847357 +0000 UTC m=+147.419755998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.819490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:28 crc kubenswrapper[5099]: E1212 15:23:28.819937 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.319921103 +0000 UTC m=+147.423829744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.831196 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s9spr" podStartSLOduration=122.831174987 podStartE2EDuration="2m2.831174987s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:28.830125399 +0000 UTC m=+146.934034050" watchObservedRunningTime="2025-12-12 15:23:28.831174987 +0000 UTC m=+146.935083628" Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.854191 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6756k" podStartSLOduration=122.854165147 podStartE2EDuration="2m2.854165147s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:28.852715569 +0000 UTC m=+146.956624210" watchObservedRunningTime="2025-12-12 15:23:28.854165147 +0000 UTC m=+146.958073798" Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.864368 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" 
event={"ID":"9fbc0f6e-a03e-414c-8f95-4bc036fac71b","Type":"ContainerStarted","Data":"bc72143c336438804d305bdfa2264ed4fc89953d808810af010049c05f46e662"} Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.973118 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:28 crc kubenswrapper[5099]: E1212 15:23:28.973776 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.473713308 +0000 UTC m=+147.577621959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:28 crc kubenswrapper[5099]: I1212 15:23:28.979396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6k74d" event={"ID":"092012fd-8c87-44d7-92dc-83036f270c8c","Type":"ContainerStarted","Data":"44020d705d29f5151b3541009a9dbee489be614ec3676a6dfa268c320f8c6059"} Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.001792 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tkw9f"] Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.074350 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.074932 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.57490906 +0000 UTC m=+147.678817711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.171424 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:29 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:29 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:29 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.171646 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.180814 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.181008 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.680974176 +0000 UTC m=+147.784882827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.181758 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.182253 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.682237999 +0000 UTC m=+147.786146640 (durationBeforeRetry 500ms). 
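Annotation: the router startup probe output above is the standard Kubernetes healthz text format: one [+]/[-] line per named sub-check (backend-http, has-synced, process-running) and an overall 500 when any check fails, which the kubelet then logs as "HTTP probe failed with statuscode: 500". A minimal aggregator in that style; the check behaviors and the listen port are invented for the example, not taken from the router's source.

```go
// Sketch: a healthz endpoint that aggregates named checks in the [+]/[-] format.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the probe's "statuscode: 500"
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}))
	_ = http.ListenAndServe(":1936", nil) // illustrative port
}
```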
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.282586 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.282785 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.782754481 +0000 UTC m=+147.886663132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.282974 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.283374 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.783357106 +0000 UTC m=+147.887265807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.384185 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.384344 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.884320459 +0000 UTC m=+147.988229100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.384425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.384882 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.884863973 +0000 UTC m=+147.988772614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.487231 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.987195592 +0000 UTC m=+148.091104243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.489732 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.492258 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.498447 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:29.998426095 +0000 UTC m=+148.102334746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.626707 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.629730 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.129650536 +0000 UTC m=+148.233559177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.639195 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-9bfsz" Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.737210 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.742212 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.242191091 +0000 UTC m=+148.346099732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.866204 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.866305 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.893245 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.893916 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.393892037 +0000 UTC m=+148.497800678 (durationBeforeRetry 500ms). 
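Annotation: the downloads pod probes above fail with "connect: connection refused" rather than an HTTP status, meaning the TCP dial itself failed because nothing was listening on 10.217.0.9:8080 yet. HTTP probes treat 2xx and 3xx responses as success and anything else, including dial errors, as failure. A stripped-down probe in that spirit; assumed semantics, not prober.go itself.

```go
// Sketch: an HTTP GET probe against the pod IP and port.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) (string, error) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "failure", err // e.g. dial tcp 10.217.0.9:8080: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", nil
	}
	return "failure", fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	result, err := probe("http://10.217.0.9:8080/")
	fmt.Println(result, err)
}
```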
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:29 crc kubenswrapper[5099]: I1212 15:23:29.994470 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:29 crc kubenswrapper[5099]: E1212 15:23:29.994853 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.494835869 +0000 UTC m=+148.598744510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.025744 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" event={"ID":"b22ffdfa-22ac-494c-b7ed-e12a8a159b9a","Type":"ContainerStarted","Data":"b23ab9342a5653d5c82b7f1db952f8feef8aeee7906b4e51d19012e1fb581e1f"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.077994 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" event={"ID":"effa37fd-72fe-49ed-99b8-e190b0115c26","Type":"ContainerStarted","Data":"dd4655a1ec9af8bc9fd9d06aa0d03abd012676de067f25811592a701986164f8"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.105295 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.106480 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.60642931 +0000 UTC m=+148.710337951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.108199 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nvw9n" podStartSLOduration=124.108185255 podStartE2EDuration="2m4.108185255s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.106968784 +0000 UTC m=+148.210877425" watchObservedRunningTime="2025-12-12 15:23:30.108185255 +0000 UTC m=+148.212093896" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.156148 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-c5k7s" podStartSLOduration=124.156127916 podStartE2EDuration="2m4.156127916s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.155077458 +0000 UTC m=+148.258986099" watchObservedRunningTime="2025-12-12 15:23:30.156127916 +0000 UTC m=+148.260036557" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.207536 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.207939 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.707925306 +0000 UTC m=+148.811833947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.210860 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:30 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:30 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:30 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.210910 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.235492 5099 ???:1] "http: TLS handshake error from 192.168.126.11:54386: no serving certificate available for the kubelet" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.277006 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" event={"ID":"1814fed8-acf1-4395-86f9-24219f084d55","Type":"ContainerStarted","Data":"dcb26dc23868f71442e3ea50869895a8402097901fdf2b689a1e80f37d2882f0"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.279147 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.279364 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.294625 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" event={"ID":"bc075443-c3a4-468b-a2db-32223eb9093b","Type":"ContainerStarted","Data":"66a096576670420d5e534d1c16ff49b49416906a3fb8402fc28c3d18dec2780e"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.304332 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jhxlt" event={"ID":"1e63fcef-7121-4025-b914-bb1ee37e8d5a","Type":"ContainerStarted","Data":"4318e42b4c9df6c7f96ba5824dbda22e09df053402b4c60ecdbf2d83a23f4ae9"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.308629 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.309610 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.809590997 +0000 UTC m=+148.913499638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.311436 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lgbxs" event={"ID":"de54b467-5c8f-470a-9fe9-c54105ff38e2","Type":"ContainerStarted","Data":"22cb0e97650bf2077a998aae8c205635e9605003278f836f2d0cda3b5750bd01"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.342128 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.351206 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.351288 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.353763 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" event={"ID":"6555fce5-2fef-4e2f-b724-ef8eb4103b7a","Type":"ContainerStarted","Data":"08eb2179583a07d6d30c2b91c7453eb6522e50ed650f30f2c6bbd0a1c4bec09f"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.354772 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.358008 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" event={"ID":"be1d4c1f-99d8-40f2-b6d5-d6b7aec07317","Type":"ContainerStarted","Data":"88f927ed1c33c9dc1ffaad84632c9f9ceae206a9152a55dce2acb4ac8e1d44cd"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.358899 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmnlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.358975 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: 
connect: connection refused" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.359072 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.360224 5099 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-dmmt7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.360266 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" podUID="6555fce5-2fef-4e2f-b724-ef8eb4103b7a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.363827 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-wn5sv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.363940 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" podUID="be1d4c1f-99d8-40f2-b6d5-d6b7aec07317" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.423915 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.433034 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkw9f" event={"ID":"6c20a46e-1da7-4233-b89a-5029f754b132","Type":"ContainerStarted","Data":"4ab96211aea6d4a2b3d80fcf6406d6dcd3134749bb094d7dcd9c6afef8368698"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.442010 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" event={"ID":"c42235e4-1a23-419b-bf5e-1e7f25ee251d","Type":"ContainerStarted","Data":"4ade89bec68d04e8c82fdb10d77d762135b1bbfab4f2b84b3ef67f153ada5986"} Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.444331 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:30.944315381 +0000 UTC m=+149.048224022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.453962 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-jf6f9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.454033 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" podUID="91b03c8a-85d9-4774-bdfe-87d41eace7ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.529848 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.538152 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.038111877 +0000 UTC m=+149.142020518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.542078 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podStartSLOduration=124.5420651 podStartE2EDuration="2m4.5420651s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.541024723 +0000 UTC m=+148.644933364" watchObservedRunningTime="2025-12-12 15:23:30.5420651 +0000 UTC m=+148.645973741" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.542405 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podStartSLOduration=124.542400488 podStartE2EDuration="2m4.542400488s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.457638698 +0000 UTC m=+148.561547339" watchObservedRunningTime="2025-12-12 15:23:30.542400488 +0000 UTC m=+148.646309129" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.600248 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.600291 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h686" event={"ID":"3adb4515-a2d2-4849-a626-81443f61d9d2","Type":"ContainerStarted","Data":"14713590ebd8144621c661f5f834eab8802f219b85a3eed93d38a5c2d793f66c"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.600312 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.617857 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-xpdmq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.617935 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xpdmq" podUID="aa68d84c-a712-4979-afe1-bdb4f8329372" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.631325 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.631613 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.131599075 +0000 UTC m=+149.235507716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.661143 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" event={"ID":"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6","Type":"ContainerStarted","Data":"498fd9f932804a3098789210c1f9905308a7c5360c2a2352e32261060e3532c4"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.661349 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41930: no serving certificate available for the kubelet" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.705815 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" event={"ID":"32e575ce-a859-4f31-a407-adf15ebb80bd","Type":"ContainerStarted","Data":"1bd3327181f1f73b4bd70c3bb6f80f119f48bcc1f6ed24c173ee46cf617c40ff"} Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.728103 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" podStartSLOduration=124.728086541 podStartE2EDuration="2m4.728086541s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.725412301 +0000 UTC m=+148.829320942" watchObservedRunningTime="2025-12-12 15:23:30.728086541 +0000 UTC m=+148.831995182" Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.732023 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.732419 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.232401873 +0000 UTC m=+149.336310514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.833357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.833910 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.333881019 +0000 UTC m=+149.437789660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.983145 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.983326 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.483295105 +0000 UTC m=+149.587203746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:30 crc kubenswrapper[5099]: I1212 15:23:30.983920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:30 crc kubenswrapper[5099]: E1212 15:23:30.984482 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.484465425 +0000 UTC m=+149.588374066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.084749 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.085033 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.584994287 +0000 UTC m=+149.688902938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.151040 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:31 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:31 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:31 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.151108 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.186496 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.186962 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.686935275 +0000 UTC m=+149.790843916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.287068 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.287230 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.78720731 +0000 UTC m=+149.891115961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.287376 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.287765 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.787753914 +0000 UTC m=+149.891662565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.388753 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.389018 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.888999474 +0000 UTC m=+149.992908115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.490628 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.491149 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:31.991125077 +0000 UTC m=+150.095033728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.592521 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.592742 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.092720987 +0000 UTC m=+150.196629628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.592825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.593111 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.093102307 +0000 UTC m=+150.197010948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.694580 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.694854 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.194824189 +0000 UTC m=+150.298732830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.695237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.695879 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.195853416 +0000 UTC m=+150.299762097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.729132 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerStarted","Data":"a7f5aa33a7ac8745337142697b31b6427e49712f17ae308468a66abf8bdbb247"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.731601 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xmdmg" event={"ID":"413d84ff-aa8f-43d9-85c8-873b9c7855da","Type":"ContainerStarted","Data":"49ab46cd5bc9487069194a145271b18137b9443c73417605923c96be12b00d03"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.733457 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" event={"ID":"dafca5e1-81a4-4904-98bd-a054d15d3afd","Type":"ContainerStarted","Data":"d3f0d550acdea0af67ac21a9130b1f31a6e122848c933944e91338188a64f9c2"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.734977 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" event={"ID":"cd3fd373-c790-4c95-94ee-3fc86809aaf2","Type":"ContainerStarted","Data":"f98bec3dedc830a23b3577758c587c49b439f650a5ac20e7000eaa18af02a666"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.737031 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" event={"ID":"900d9c74-c186-4e65-8d3d-5f282ced8617","Type":"ContainerStarted","Data":"71a4911b242ba143f8ef67a150460a2c45c24b943fcc180a272c2c51a43b997d"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.738449 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" event={"ID":"613e1678-fee7-4592-a8e1-36a5454d482c","Type":"ContainerStarted","Data":"2dd85454fb351746afd39a4c4e089dd3d3d57e54cfeb37974e2d102ead27673b"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.740589 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" event={"ID":"21cff6b7-1a65-4937-9d61-00c599278e4c","Type":"ContainerStarted","Data":"d8cbb70285f3f8505d698d716c74a9d5f8c513dc5294559775493c7dd8dbbb73"} Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.796845 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.796986 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.296964833 +0000 UTC m=+150.400873474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.797069 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.797373 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.297363233 +0000 UTC m=+150.401271874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:31 crc kubenswrapper[5099]: I1212 15:23:31.898116 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:31 crc kubenswrapper[5099]: E1212 15:23:31.898446 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.398408578 +0000 UTC m=+150.502317219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:31.999866 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.000490 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.500398148 +0000 UTC m=+150.604306809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.100980 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.101178 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.601147355 +0000 UTC m=+150.705055996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.101526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.101959 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.601942766 +0000 UTC m=+150.705851407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.147211 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.149282 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:32 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:32 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:32 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.149372 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.203016 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.203223 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.703171996 +0000 UTC m=+150.807080647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.203372 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.203892 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.703869184 +0000 UTC m=+150.807777845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.285592 5099 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-dmmt7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.285751 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" podUID="6555fce5-2fef-4e2f-b724-ef8eb4103b7a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.304539 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.304750 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.804717854 +0000 UTC m=+150.908626495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.305224 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.305621 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.805605107 +0000 UTC m=+150.909513758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.406888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.407056 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.907028882 +0000 UTC m=+151.010937543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.407185 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.407741 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:32.90772231 +0000 UTC m=+151.011630951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.447355 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.447444 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.491501 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-jf6f9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.491610 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" podUID="91b03c8a-85d9-4774-bdfe-87d41eace7ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.508562 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.508836 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.008799195 +0000 UTC m=+151.112707836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.509042 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.509385 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.0093717 +0000 UTC m=+151.113280341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.609882 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.610039 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.110019025 +0000 UTC m=+151.213927656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.610237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.610764 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.110754104 +0000 UTC m=+151.214662745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.711848 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.711962 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.211938403 +0000 UTC m=+151.315847044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.712152 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.712799 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.212783905 +0000 UTC m=+151.316692556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.732418 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmnlp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.732617 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 15:23:32 crc kubenswrapper[5099]: I1212 15:23:32.912910 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:32 crc kubenswrapper[5099]: E1212 15:23:32.913228 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.413211964 +0000 UTC m=+151.517120605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.014772 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.015215 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.515196911 +0000 UTC m=+151.619105552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.116479 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.116616 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.616593095 +0000 UTC m=+151.720501746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.116930 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.117366 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.617349375 +0000 UTC m=+151.721258026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.149731 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:33 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:33 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:33 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.149836 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.217637 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.217801 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.717778644 +0000 UTC m=+151.821687295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.217935 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.218283 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.718272056 +0000 UTC m=+151.822180697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.472865 5099 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-jf6f9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.472992 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" podUID="91b03c8a-85d9-4774-bdfe-87d41eace7ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.473156 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.473511 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:33.973496018 +0000 UTC m=+152.077404659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.574586 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.575042 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.07502222 +0000 UTC m=+152.178930861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.654697 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-lgbxs" podStartSLOduration=127.654650476 podStartE2EDuration="2m7.654650476s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:30.79362626 +0000 UTC m=+148.897534901" watchObservedRunningTime="2025-12-12 15:23:33.654650476 +0000 UTC m=+151.758559117" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.676334 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.676548 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.176504276 +0000 UTC m=+152.280412917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.676961 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.677347 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.177329657 +0000 UTC m=+152.281238298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.700502 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41940: no serving certificate available for the kubelet" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.764596 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.764720 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.767282 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-wn5sv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.767343 5099 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-dmmt7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.767374 5099 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" podUID="be1d4c1f-99d8-40f2-b6d5-d6b7aec07317" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.767453 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" podUID="6555fce5-2fef-4e2f-b724-ef8eb4103b7a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.768556 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmnlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.768591 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.798137 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.800119 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.300097919 +0000 UTC m=+152.404006560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.870349 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41956: no serving certificate available for the kubelet" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.877575 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" podStartSLOduration=127.877556699 podStartE2EDuration="2m7.877556699s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.76257301 +0000 UTC m=+151.866481671" watchObservedRunningTime="2025-12-12 15:23:33.877556699 +0000 UTC m=+151.981465350" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.895829 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-z2k9v" podStartSLOduration=127.895814895 podStartE2EDuration="2m7.895814895s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.893800902 +0000 UTC m=+151.997709543" watchObservedRunningTime="2025-12-12 15:23:33.895814895 +0000 UTC m=+151.999723526" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.901782 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:33 crc kubenswrapper[5099]: E1212 15:23:33.902057 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.402045307 +0000 UTC m=+152.505953948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.931064 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41972: no serving certificate available for the kubelet" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.932198 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ptfxw" podStartSLOduration=127.932179533 podStartE2EDuration="2m7.932179533s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.931027263 +0000 UTC m=+152.034935904" watchObservedRunningTime="2025-12-12 15:23:33.932179533 +0000 UTC m=+152.036088174" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.958070 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" podStartSLOduration=127.958042688 podStartE2EDuration="2m7.958042688s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.956146888 +0000 UTC m=+152.060055539" watchObservedRunningTime="2025-12-12 15:23:33.958042688 +0000 UTC m=+152.061951329" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.988889 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gtvwd" podStartSLOduration=127.988871512 podStartE2EDuration="2m7.988871512s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.987302901 +0000 UTC m=+152.091211542" watchObservedRunningTime="2025-12-12 15:23:33.988871512 +0000 UTC m=+152.092780143" Dec 12 15:23:33 crc kubenswrapper[5099]: I1212 15:23:33.989341 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jhxlt" podStartSLOduration=16.989335284 podStartE2EDuration="16.989335284s" podCreationTimestamp="2025-12-12 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:33.973808999 +0000 UTC m=+152.077717630" watchObservedRunningTime="2025-12-12 15:23:33.989335284 +0000 UTC m=+152.093243915" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.005891 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.006158 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.506136722 +0000 UTC m=+152.610045363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.006249 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.006972 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.506948353 +0000 UTC m=+152.610856994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.008634 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pwlkx" podStartSLOduration=128.008615746 podStartE2EDuration="2m8.008615746s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:34.006229894 +0000 UTC m=+152.110138535" watchObservedRunningTime="2025-12-12 15:23:34.008615746 +0000 UTC m=+152.112524387" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.044202 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41978: no serving certificate available for the kubelet" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.108275 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.108704 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:34.608686466 +0000 UTC m=+152.712595097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.139298 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41988: no serving certificate available for the kubelet" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.151946 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:34 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:34 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:34 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.154311 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.211025 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.211423 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.711407385 +0000 UTC m=+152.815316026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.238365 5099 ???:1] "http: TLS handshake error from 192.168.126.11:41990: no serving certificate available for the kubelet" Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.318841 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.319006 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.81897683 +0000 UTC m=+152.922885481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.319074 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.319430 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.819420721 +0000 UTC m=+152.923329362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:34 crc kubenswrapper[5099]: I1212 15:23:34.429583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:34 crc kubenswrapper[5099]: E1212 15:23:34.429966 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:34.929941953 +0000 UTC m=+153.033850594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:34.531754 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:34.532124 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.032108647 +0000 UTC m=+153.136017288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:34.632546 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:34.632805 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.132788832 +0000 UTC m=+153.236697473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.115295 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.115781 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.615764087 +0000 UTC m=+153.719672728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.120089 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-wn5sv container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.120185 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" podUID="be1d4c1f-99d8-40f2-b6d5-d6b7aec07317" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.165333 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:35 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:35 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:35 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.165468 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.216098 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.216769 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.71675042 +0000 UTC m=+153.820659061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.318270 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.319129 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.819113549 +0000 UTC m=+153.923022190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.399685 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkw9f" event={"ID":"6c20a46e-1da7-4233-b89a-5029f754b132","Type":"ContainerStarted","Data":"3384fbef4dbd3f9678902c732a30f9ec7a47b028299140dd0ef306e391b6f313"} Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.402249 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" event={"ID":"9fbc0f6e-a03e-414c-8f95-4bc036fac71b","Type":"ContainerStarted","Data":"a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3"} Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.472714 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.472935 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.972907051 +0000 UTC m=+154.076815692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.473345 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.474142 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:35.974104312 +0000 UTC m=+154.078012953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.479403 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6k74d" event={"ID":"092012fd-8c87-44d7-92dc-83036f270c8c","Type":"ContainerStarted","Data":"05555a6a68c79e2d181bd2cb19004734c9e0d48c4a51f4f0336b1a8acbf14fae"} Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.509581 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jf6f9" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.515656 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.528429 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6k74d" podStartSLOduration=18.528407797 podStartE2EDuration="18.528407797s" podCreationTimestamp="2025-12-12 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:35.526575729 +0000 UTC m=+153.630484370" watchObservedRunningTime="2025-12-12 15:23:35.528407797 +0000 UTC m=+153.632316468" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.533635 5099 ???:1] "http: TLS handshake error from 192.168.126.11:42000: no serving certificate available for the kubelet" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.562112 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" 
event={"ID":"32e575ce-a859-4f31-a407-adf15ebb80bd","Type":"ContainerStarted","Data":"1609ebed5d2dea4ba4e1b40fde1719e3840cdd03e03f4ce8a81fd5b17f783131"} Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.562756 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.574369 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.575937 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.075908526 +0000 UTC m=+154.179817167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.640530 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" event={"ID":"5002e2d3-b94c-4d2d-a391-9f84e63ffd20","Type":"ContainerStarted","Data":"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70"} Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.641818 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.644049 5099 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-gmbdh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.644119 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.648269 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmnlp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.648966 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.649353 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-wn5sv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.649406 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" podUID="be1d4c1f-99d8-40f2-b6d5-d6b7aec07317" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.654004 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.654246 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.694817 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.699847 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.199829397 +0000 UTC m=+154.303738038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.775645 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" podStartSLOduration=129.775628394 podStartE2EDuration="2m9.775628394s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:35.77355231 +0000 UTC m=+153.877460961" watchObservedRunningTime="2025-12-12 15:23:35.775628394 +0000 UTC m=+153.879537045" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.796127 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.796269 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.296245922 +0000 UTC m=+154.400154563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.796386 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.796724 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.296715214 +0000 UTC m=+154.400623855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.876395 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" podStartSLOduration=129.876369721 podStartE2EDuration="2m9.876369721s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:35.872548781 +0000 UTC m=+153.976457422" watchObservedRunningTime="2025-12-12 15:23:35.876369721 +0000 UTC m=+153.980278382" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.889960 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" podStartSLOduration=129.889936565 podStartE2EDuration="2m9.889936565s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:35.887538112 +0000 UTC m=+153.991446753" watchObservedRunningTime="2025-12-12 15:23:35.889936565 +0000 UTC m=+153.993845206" Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.897318 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.897583 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.397550583 +0000 UTC m=+154.501459224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.897713 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.898226 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.398209671 +0000 UTC m=+154.502118312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:35 crc kubenswrapper[5099]: I1212 15:23:35.998839 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:35 crc kubenswrapper[5099]: E1212 15:23:35.999115 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.499097951 +0000 UTC m=+154.603006592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.100924 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.101330 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.601316357 +0000 UTC m=+154.705224998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.150262 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:36 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:36 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:36 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.150330 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.201876 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.202088 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.702057964 +0000 UTC m=+154.805966605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.202339 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.202787 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.702771703 +0000 UTC m=+154.806680344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.303800 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.304175 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.804144506 +0000 UTC m=+154.908053147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.405548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.405962 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:36.905948381 +0000 UTC m=+155.009857022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.515086 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.515398 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.015350984 +0000 UTC m=+155.119259625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.515604 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.516037 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.016017011 +0000 UTC m=+155.119925722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.618211 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.618611 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.118594476 +0000 UTC m=+155.222503117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.653507 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" event={"ID":"dafca5e1-81a4-4904-98bd-a054d15d3afd","Type":"ContainerStarted","Data":"d17e4cab44f8e6ba9ca8aa6533b150db4ae8ea36ef7edf0681f99e070773598b"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.655874 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" event={"ID":"18b0c64a-ef3e-458e-a852-c3bac6e0f3a6","Type":"ContainerStarted","Data":"d74c0b378cf215894e9fba0f4597509b35c0a249bb3feaea91011e4aced60a02"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.657814 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" event={"ID":"38193cbf-c891-4f9a-910f-2d7333064556","Type":"ContainerStarted","Data":"7b5c9effd85fec121ec611fd36c716b83c19591d6f2d794eb6e53b2ad51df27f"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.659786 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" event={"ID":"bc075443-c3a4-468b-a2db-32223eb9093b","Type":"ContainerStarted","Data":"63d0d9800ba7b3b9bee5113372a0f43e70d1a4d3cda8acc135228da5f4f78ced"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.661114 5099 generic.go:358] "Generic (PLEG): container finished" podID="f241e2a0-3e8f-45a4-805e-729b31ed6add" containerID="fce5df51b542ba3814052cdee12cf03b759e6de16a177c40332bdb514d06393d" exitCode=0 Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.661195 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" event={"ID":"f241e2a0-3e8f-45a4-805e-729b31ed6add","Type":"ContainerDied","Data":"fce5df51b542ba3814052cdee12cf03b759e6de16a177c40332bdb514d06393d"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.663129 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" event={"ID":"44e6a852-c3c4-48d1-b80b-fb3c0897af7c","Type":"ContainerStarted","Data":"158ecbd5456b23fa436fb7ed6779604368857005239d0c8fb9d2f8e8e5d8a0fa"} Dec 12 15:23:36 crc kubenswrapper[5099]: I1212 15:23:36.723023 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:36 crc kubenswrapper[5099]: E1212 15:23:36.723337 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:37.223324897 +0000 UTC m=+155.327233538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.007916 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.008442 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.508423892 +0000 UTC m=+155.612332533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.008588 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.009020 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.509012337 +0000 UTC m=+155.612920978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.110374 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.110603 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.610584236 +0000 UTC m=+155.714492877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.112767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.113327 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.613292537 +0000 UTC m=+155.717201208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.123124 5099 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-gmbdh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.123176 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.217296 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.217840 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.717813302 +0000 UTC m=+155.821721943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.258824 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:37 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:37 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:37 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.259107 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.396724 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.398609 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:37.898594286 +0000 UTC m=+156.002502917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.535011 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.535279 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.0352609 +0000 UTC m=+156.139169541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.535417 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.535476 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.823459 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.823927 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.323895087 +0000 UTC m=+156.427803738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.931763 5099 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-gmbdh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.932414 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.932762 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.935025 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.935142 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.435112947 +0000 UTC m=+156.539021588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:37 crc kubenswrapper[5099]: I1212 15:23:37.937383 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:37 crc kubenswrapper[5099]: E1212 15:23:37.939303 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.439287156 +0000 UTC m=+156.543195797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:38 crc kubenswrapper[5099]: I1212 15:23:38.078063 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:38 crc kubenswrapper[5099]: E1212 15:23:38.078475 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.578454194 +0000 UTC m=+156.682362835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:38 crc kubenswrapper[5099]: I1212 15:23:38.184150 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:38 crc kubenswrapper[5099]: I1212 15:23:38.316348 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:38 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:38 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:38 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:38 crc kubenswrapper[5099]: E1212 15:23:38.317636 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:38.817620721 +0000 UTC m=+156.921529362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:38 crc kubenswrapper[5099]: I1212 15:23:38.983599 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.089216 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.090350 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.091297 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.091256185 +0000 UTC m=+158.195164826 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.252285 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:39 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:39 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:39 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.252429 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.257695 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.258110 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:39.758090656 +0000 UTC m=+157.861999307 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.451105 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.451892 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:39.951877879 +0000 UTC m=+158.055786520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.585325 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.585809 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.085787841 +0000 UTC m=+158.189696482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.700453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.700965 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.200947344 +0000 UTC m=+158.304855975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.931196 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.931252 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:39 crc kubenswrapper[5099]: I1212 15:23:39.977242 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:39 crc kubenswrapper[5099]: E1212 15:23:39.977820 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.477797114 +0000 UTC m=+158.581705755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.246548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.246979 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.746952426 +0000 UTC m=+158.850861067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.349427 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.349895 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:40.849863656 +0000 UTC m=+158.953772297 (durationBeforeRetry 500ms). 
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.577577 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-xpdmq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.593046 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:23:40 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Dec 12 15:23:40 crc kubenswrapper[5099]: [+]process-running ok
Dec 12 15:23:40 crc kubenswrapper[5099]: healthz check failed
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.593153 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.593455 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xpdmq" podUID="aa68d84c-a712-4979-afe1-bdb4f8329372" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.591440 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.591883 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.091864937 +0000 UTC m=+159.195773578 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.596016 5099 patch_prober.go:28] interesting pod/apiserver-8596bd845d-4lz9z container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.596075 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" podUID="900d9c74-c186-4e65-8d3d-5f282ced8617" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.709573 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.720236 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.220174103 +0000 UTC m=+159.324082744 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.799394 5099 ???:1] "http: TLS handshake error from 192.168.126.11:50088: no serving certificate available for the kubelet"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.810403 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.810457 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z"
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.849705 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.850031 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.350019419 +0000 UTC m=+159.453928060 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:40 crc kubenswrapper[5099]: I1212 15:23:40.950722 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:40 crc kubenswrapper[5099]: E1212 15:23:40.952069 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.452029319 +0000 UTC m=+159.555937960 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.052435 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.053686 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.553644979 +0000 UTC m=+159.657553630 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.115461 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dcr5l" podStartSLOduration=135.1154383 podStartE2EDuration="2m15.1154383s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:40.718882839 +0000 UTC m=+158.822791490" watchObservedRunningTime="2025-12-12 15:23:41.1154383 +0000 UTC m=+159.219346941"
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.183727 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:23:41 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Dec 12 15:23:41 crc kubenswrapper[5099]: [+]process-running ok
Dec 12 15:23:41 crc kubenswrapper[5099]: healthz check failed
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.183757 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.183799 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.184250 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.684203253 +0000 UTC m=+159.788111894 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.311088 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.311502 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.811486462 +0000 UTC m=+159.915395103 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.419781 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.420196 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:41.920168527 +0000 UTC m=+160.024077168 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.460333 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bjx2r" podStartSLOduration=135.460309433 podStartE2EDuration="2m15.460309433s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:41.125268706 +0000 UTC m=+159.229177367" watchObservedRunningTime="2025-12-12 15:23:41.460309433 +0000 UTC m=+159.564218084"
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.522902 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.523358 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.023344557 +0000 UTC m=+160.127253198 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.621493 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-dwwwc" podStartSLOduration=135.621465426 podStartE2EDuration="2m15.621465426s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:41.457203912 +0000 UTC m=+159.561112563" watchObservedRunningTime="2025-12-12 15:23:41.621465426 +0000 UTC m=+159.725374067"
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.626532 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.626740 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.126696592 +0000 UTC m=+160.230605233 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.627233 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.627641 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.127627557 +0000 UTC m=+160.231536198 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.749161 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.749499 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.249469673 +0000 UTC m=+160.353378304 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.749614 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.750163 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.25014789 +0000 UTC m=+160.354056521 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.760877 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-k2mn9" podStartSLOduration=135.76086111 podStartE2EDuration="2m15.76086111s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:41.759456583 +0000 UTC m=+159.863365224" watchObservedRunningTime="2025-12-12 15:23:41.76086111 +0000 UTC m=+159.864769751"
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.877427 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:41 crc kubenswrapper[5099]: E1212 15:23:41.877816 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.377794099 +0000 UTC m=+160.481702740 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:41 crc kubenswrapper[5099]: I1212 15:23:41.919647 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podStartSLOduration=24.91963055 podStartE2EDuration="24.91963055s" podCreationTimestamp="2025-12-12 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:41.912623357 +0000 UTC m=+160.016531998" watchObservedRunningTime="2025-12-12 15:23:41.91963055 +0000 UTC m=+160.023539191"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.005640 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.006082 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.506068564 +0000 UTC m=+160.609977205 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.135287 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.135628 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.635590901 +0000 UTC m=+160.739499552 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
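[Annotation: every retry so far, mount and unmount alike, fails with the same root cause, "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", which points at node-level driver registration rather than at the volume itself. A sketch of one way to check registration using the official Kubernetes Python client (package kubernetes); the node name crc comes from this log, and the snippet is an illustration, not a prescribed procedure:

from kubernetes import client, config

# Compare cluster-level CSIDriver objects with what this node's kubelet
# has actually registered (reflected in the node's CSINode object).
config.load_kube_config()
storage = client.StorageV1Api()

installed = {d.metadata.name for d in storage.list_csi_driver().items}
node = storage.read_csi_node(name="crc")  # node name taken from this log
registered = {d.name for d in (node.spec.drivers or [])}

print("installed CSIDriver objects:", sorted(installed))
print("registered on node:", sorted(registered))
print("missing on node:", sorted(installed - registered))

If the driver is installed but missing from the CSINode, the usual suspect is the driver's node plugin pod or its node-driver-registrar sidecar not having come back up after the kubelet restart.]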
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.183545 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:23:42 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Dec 12 15:23:42 crc kubenswrapper[5099]: [+]process-running ok
Dec 12 15:23:42 crc kubenswrapper[5099]: healthz check failed
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.184020 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.262626 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.262926 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.762910181 +0000 UTC m=+160.866818822 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.265321 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.281795 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.291000 5099 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-dmmt7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.291069 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" podUID="6555fce5-2fef-4e2f-b724-ef8eb4103b7a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.304257 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.304566 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.436503 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.436704 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.436759 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.437031 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:42.93699474 +0000 UTC m=+161.040903391 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.445274 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.542704 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.542746 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.542793 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.543345 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.543578 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.043567039 +0000 UTC m=+161.147475680 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.624513 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.657877 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.658167 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.158148847 +0000 UTC m=+161.262057488 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.876120 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.882199 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fmnlp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.882256 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.883313 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:42 crc kubenswrapper[5099]: E1212 15:23:42.891022 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.390997279 +0000 UTC m=+161.494905920 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:42 crc kubenswrapper[5099]: I1212 15:23:42.965984 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.985542 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkw9f" event={"ID":"6c20a46e-1da7-4233-b89a-5029f754b132","Type":"ContainerStarted","Data":"f1e30e587802fd0b251cc066c76ede4ba4db53279fe84b2b2be5559e66578268"}
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.986359 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-tkw9f"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.991192 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb" event={"ID":"f241e2a0-3e8f-45a4-805e-729b31ed6add","Type":"ContainerDied","Data":"c46b6726ec771e740a33b5d22b53091cb53e83d21ce85a4f35d48cd8bb5b1ecb"}
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.991222 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46b6726ec771e740a33b5d22b53091cb53e83d21ce85a4f35d48cd8bb5b1ecb"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.991379 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-6fljb"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:42.995756 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" event={"ID":"44e6a852-c3c4-48d1-b80b-fb3c0897af7c","Type":"ContainerStarted","Data":"721882c73be6ff4f60746a05a5bcc29f25b25085dd8d23c827a1659d9349f113"}
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.016758 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.017122 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.517100688 +0000 UTC m=+161.621009329 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.119044 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmdnp\" (UniqueName: \"kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp\") pod \"f241e2a0-3e8f-45a4-805e-729b31ed6add\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.119164 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume\") pod \"f241e2a0-3e8f-45a4-805e-729b31ed6add\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.119484 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume\") pod \"f241e2a0-3e8f-45a4-805e-729b31ed6add\" (UID: \"f241e2a0-3e8f-45a4-805e-729b31ed6add\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.119737 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.123748 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.623725918 +0000 UTC m=+161.727634619 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.131482 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume" (OuterVolumeSpecName: "config-volume") pod "f241e2a0-3e8f-45a4-805e-729b31ed6add" (UID: "f241e2a0-3e8f-45a4-805e-729b31ed6add"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.143151 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5wjhf"]
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.143524 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" gracePeriod=30
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.157781 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f241e2a0-3e8f-45a4-805e-729b31ed6add" (UID: "f241e2a0-3e8f-45a4-805e-729b31ed6add"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.163844 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp" (OuterVolumeSpecName: "kube-api-access-kmdnp") pod "f241e2a0-3e8f-45a4-805e-729b31ed6add" (UID: "f241e2a0-3e8f-45a4-805e-729b31ed6add"). InnerVolumeSpecName "kube-api-access-kmdnp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.168105 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:23:43 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Dec 12 15:23:43 crc kubenswrapper[5099]: [+]process-running ok
Dec 12 15:23:43 crc kubenswrapper[5099]: healthz check failed
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.168165 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.220990 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.221611 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f241e2a0-3e8f-45a4-805e-729b31ed6add-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.221737 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kmdnp\" (UniqueName: \"kubernetes.io/projected/f241e2a0-3e8f-45a4-805e-729b31ed6add-kube-api-access-kmdnp\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.221876 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f241e2a0-3e8f-45a4-805e-729b31ed6add-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.222042 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.722021132 +0000 UTC m=+161.825929773 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.326396 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.326955 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.826938288 +0000 UTC m=+161.930846929 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.357687 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tkw9f" podStartSLOduration=26.357657289 podStartE2EDuration="26.357657289s" podCreationTimestamp="2025-12-12 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:43.33507469 +0000 UTC m=+161.438983331" watchObservedRunningTime="2025-12-12 15:23:43.357657289 +0000 UTC m=+161.461565940"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.358865 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.359353 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f241e2a0-3e8f-45a4-805e-729b31ed6add" containerName="collect-profiles"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.359369 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f241e2a0-3e8f-45a4-805e-729b31ed6add" containerName="collect-profiles"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.359452 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f241e2a0-3e8f-45a4-805e-729b31ed6add" containerName="collect-profiles"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.428399 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.430978 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:43.93095289 +0000 UTC m=+162.034861531 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.450889 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.450984 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.531694 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.532374 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.032360614 +0000 UTC m=+162.136269255 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.555331 5099 ???:1] "http: TLS handshake error from 192.168.126.11:50098: no serving certificate available for the kubelet"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.592840 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf241e2a0_3e8f_45a4_805e_729b31ed6add.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf241e2a0_3e8f_45a4_805e_729b31ed6add.slice/crio-c46b6726ec771e740a33b5d22b53091cb53e83d21ce85a4f35d48cd8bb5b1ecb\": RecentStats: unable to find data in memory cache]"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.684758 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.685023 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.721715 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.721935 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.221895837 +0000 UTC m=+162.325804478 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.722209 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.722572 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.222558824 +0000 UTC m=+162.326467465 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.727578 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.728315 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.872602 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.873458 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.373210413 +0000 UTC m=+162.477119054 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.873633 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.873739 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.873794 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.874156 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.374146287 +0000 UTC m=+162.478054928 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.883749 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" podStartSLOduration=137.883732117 podStartE2EDuration="2m17.883732117s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:43.417125809 +0000 UTC m=+161.521034450" watchObservedRunningTime="2025-12-12 15:23:43.883732117 +0000 UTC m=+161.987640758"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.974837 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.975223 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.975316 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 15:23:43 crc kubenswrapper[5099]: E1212 15:23:43.976279 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.47625166 +0000 UTC m=+162.580160311 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.976346 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:23:43 crc kubenswrapper[5099]: I1212 15:23:43.978264 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-dmmt7" Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.023953 5099 patch_prober.go:28] interesting pod/dns-default-tkw9f container/dns namespace/openshift-dns: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=kubernetes Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.024065 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-tkw9f" podUID="6c20a46e-1da7-4233-b89a-5029f754b132" containerName="dns" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.084455 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.086254 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.586231478 +0000 UTC m=+162.690140119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.204728 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.205527 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:23:44.705500138 +0000 UTC m=+162.809408779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.206496 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.307242 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.307892 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.807878148 +0000 UTC m=+162.911786779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.473026 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.473440 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:44.973421105 +0000 UTC m=+163.077329746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.491522 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.644010 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.644411 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.144392253 +0000 UTC m=+163.248300894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.748243 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.748413 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.248377884 +0000 UTC m=+163.352286525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.748871 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:44 crc kubenswrapper[5099]: E1212 15:23:44.749369 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.24935542 +0000 UTC m=+163.353264061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.755902 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:44 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:44 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:44 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:44 crc kubenswrapper[5099]: I1212 15:23:44.756001 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:44.987632 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:44.996948 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:44.997552 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.497533562 +0000 UTC m=+163.601442203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.019066 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.085324 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tkw9f" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.105109 5099 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-9d2g2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.105201 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" podUID="44e6a852-c3c4-48d1-b80b-fb3c0897af7c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.109973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.110391 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.610373024 +0000 UTC m=+163.714281665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.211924 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:45 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:45 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:45 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.212011 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.212351 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.212617 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.71260007 +0000 UTC m=+163.816508711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.320790 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.321357 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.821331874 +0000 UTC m=+163.925240515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.421716 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.422122 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:45.922096742 +0000 UTC m=+164.026005383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.523046 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.523462 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.023448455 +0000 UTC m=+164.127357106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.656618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.656972 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.156951836 +0000 UTC m=+164.260860467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.763055 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.763516 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.263501225 +0000 UTC m=+164.367409876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.788452 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.806906 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-wn5sv" Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.883355 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.890214 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.390186858 +0000 UTC m=+164.494095499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:45 crc kubenswrapper[5099]: I1212 15:23:45.987297 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:45 crc kubenswrapper[5099]: E1212 15:23:45.987775 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.487754272 +0000 UTC m=+164.591662913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.088329 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.088888 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.588864429 +0000 UTC m=+164.692773070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.150003 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:46 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:46 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:46 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.150090 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.219173 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.222504 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.722486034 +0000 UTC m=+164.826394765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.282087 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.321310 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.321728 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.821707581 +0000 UTC m=+164.925616222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.426518 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.427416 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:46.927393307 +0000 UTC m=+165.031301948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.594259 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.594557 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.094535585 +0000 UTC m=+165.198444236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.658797 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zkw7z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.658895 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" podUID="1814fed8-acf1-4395-86f9-24219f084d55" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.697630 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.698082 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.198066815 +0000 UTC m=+165.301975456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.799319 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.799710 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.299644964 +0000 UTC m=+165.403553605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.832756 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.875205 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4lz9z" Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.900606 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:46 crc kubenswrapper[5099]: E1212 15:23:46.901225 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.401212283 +0000 UTC m=+165.505120924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:46 crc kubenswrapper[5099]: I1212 15:23:46.992181 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.017148 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.018153 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.518136642 +0000 UTC m=+165.622045283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.119059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.119453 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.619438963 +0000 UTC m=+165.723347604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.369008 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.369539 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.869517155 +0000 UTC m=+165.973425796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.373921 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f","Type":"ContainerStarted","Data":"a2f4b9ba312a31228dcb44b4eec73ef55a6a2a490bfe5fe86e40a7fe731d46d7"} Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.383949 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:47 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:47 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:47 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.384077 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.401374 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.410543 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.410606 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" 
podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.418300 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.418371 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"358fc99d-f65f-48d7-97bc-61e86bb73c59","Type":"ContainerStarted","Data":"da83e0b2abda99b5dadaf45cee58a8952aff3ca779234cdd6c65e93360269193"} Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.418614 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.425829 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.485047 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.485129 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.485206 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6gq\" (UniqueName: \"kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.485236 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.485874 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:47.985861699 +0000 UTC m=+166.089770340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.586214 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.588434 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.588577 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.588651 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq6gq\" (UniqueName: \"kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.588701 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.589286 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.589358 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.089340487 +0000 UTC m=+166.193249128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.589878 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.616102 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.616314 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.627257 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.663816 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq6gq\" (UniqueName: \"kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq\") pod \"certified-operators-xmkwr\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.689875 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6whh\" (UniqueName: \"kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.689973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.690005 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.690220 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 
15:23:47.690652 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.190632398 +0000 UTC m=+166.294541029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.796024 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.796306 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6whh\" (UniqueName: \"kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.796366 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.796383 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.796985 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.797075 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.297057184 +0000 UTC m=+166.400965825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.797752 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.805985 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.852787 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"] Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.876037 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.929494 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9znx\" (UniqueName: \"kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.929846 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:47 crc kubenswrapper[5099]: E1212 15:23:47.930321 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.430300608 +0000 UTC m=+166.534209289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.930657 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.930818 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:47 crc kubenswrapper[5099]: I1212 15:23:47.953021 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.039610 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.040063 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.040105 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.040141 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c9znx\" (UniqueName: \"kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.040371 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.540335338 +0000 UTC m=+166.644243989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.041029 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.041297 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.053287 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6whh\" (UniqueName: \"kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh\") pod \"community-operators-zqx68\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.150204 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.152153 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.652132353 +0000 UTC m=+166.756041054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.152828 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:48 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:48 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:48 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.152885 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.182093 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"] Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.189795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9znx\" (UniqueName: \"kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx\") pod \"certified-operators-v8tnw\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") " pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.453229 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.453440 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.453893 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:48.953873797 +0000 UTC m=+167.057782438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.454237 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.514008 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.539376 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f","Type":"ContainerStarted","Data":"f90b3baac077d45f04b3fc3f9dbd4427b32149e2ce25735cf6bac2a261cab7f7"} Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.539431 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h686" event={"ID":"3adb4515-a2d2-4849-a626-81443f61d9d2","Type":"ContainerStarted","Data":"ca72a29fe8c131ca9109188a86fe98c61810d03c2650c28851a58ab0f9e8f974"} Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.539502 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.565028 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.565633 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.065610105 +0000 UTC m=+167.169518746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.570274 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.850504 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.850748 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.850800 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.850864 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtqz\" (UniqueName: \"kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.850990 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.350971596 +0000 UTC m=+167.454880237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.956224 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtqz\" (UniqueName: \"kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.956307 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.956369 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.956466 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.957270 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.958065 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:48 crc kubenswrapper[5099]: E1212 15:23:48.958411 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.458394907 +0000 UTC m=+167.562303548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:48 crc kubenswrapper[5099]: I1212 15:23:48.998332 5099 ???:1] "http: TLS handshake error from 192.168.126.11:50102: no serving certificate available for the kubelet" Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.058864 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.064536 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtqz\" (UniqueName: \"kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz\") pod \"community-operators-tvbzr\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.065928 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.5658967 +0000 UTC m=+167.669805341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.068616 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.069129 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.569113224 +0000 UTC m=+167.673021865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.116632 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.144072 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.158000 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.158882 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.158941 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.171881 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.172244 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.672224493 +0000 UTC m=+167.776133134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.179401 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:49 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:49 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:49 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.179493 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.274559 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.275052 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.775039024 +0000 UTC m=+167.878947665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.382362 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.382822 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:49.882804954 +0000 UTC m=+167.986713595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.499990 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.500573 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.000555865 +0000 UTC m=+168.104464506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.503647 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.602638 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.603314 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.103296934 +0000 UTC m=+168.207205575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.705604 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.706006 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.205991802 +0000 UTC m=+168.309900443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: W1212 15:23:49.726631 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50cbce4d_a234_4a2e_b683_8ecf21d93474.slice/crio-6a9785249a7fc266214a8c4aa71befe83ab5a307c37643391ef2a478ad5a3486 WatchSource:0}: Error finding container 6a9785249a7fc266214a8c4aa71befe83ab5a307c37643391ef2a478ad5a3486: Status 404 returned error can't find the container with id 6a9785249a7fc266214a8c4aa71befe83ab5a307c37643391ef2a478ad5a3486 Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.809196 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.809557 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.309540372 +0000 UTC m=+168.413449013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.851019 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.851098 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:23:49 crc kubenswrapper[5099]: I1212 15:23:49.911434 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:49 crc kubenswrapper[5099]: E1212 15:23:49.911923 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.411908822 +0000 UTC m=+168.515817463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.013132 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.013388 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.513355477 +0000 UTC m=+168.617264118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.013721 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.014093 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.514079316 +0000 UTC m=+168.617987947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: W1212 15:23:50.070086 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87e8cc75_22ad_4cd9_afc2_0da7c49fe9ae.slice/crio-3d5ec0a1179cc21951c11aa9d8ee99310bab2b0c55516a12a71f1e79bafcd5c9 WatchSource:0}: Error finding container 3d5ec0a1179cc21951c11aa9d8ee99310bab2b0c55516a12a71f1e79bafcd5c9: Status 404 returned error can't find the container with id 3d5ec0a1179cc21951c11aa9d8ee99310bab2b0c55516a12a71f1e79bafcd5c9 Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.115486 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.115829 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.615775148 +0000 UTC m=+168.719683779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.153034 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:50 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:50 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:50 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.153138 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216601 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216644 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"358fc99d-f65f-48d7-97bc-61e86bb73c59","Type":"ContainerStarted","Data":"32d59a5006ea948fb9a2e2650afc0983533be16ec64fa91a8b9c0f8c38ca9edb"} Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216781 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216827 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216839 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216831 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.216856 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.217055 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"] Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.217343 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.217702 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.717689616 +0000 UTC m=+168.821598257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.220627 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.319159 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.319348 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.819315376 +0000 UTC m=+168.923224017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.319507 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.319649 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.319736 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.319832 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqgn\" (UniqueName: \"kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.320067 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.820053845 +0000 UTC m=+168.923962486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.421043 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.421219 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.921195813 +0000 UTC m=+169.025104454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.421317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.421372 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.421424 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.421472 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqgn\" (UniqueName: \"kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: E1212 15:23:50.421847 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:50.921835899 +0000 UTC m=+169.025744540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.422099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.422347 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.444553 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqgn\" (UniqueName: \"kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn\") pod \"redhat-marketplace-sfqhr\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.534987 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfqhr"
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.562446 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-xpdmq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Dec 12 15:23:50 crc kubenswrapper[5099]: I1212 15:23:50.562513 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xpdmq" podUID="aa68d84c-a712-4979-afe1-bdb4f8329372" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused"
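The MountDevice/TearDown failures above repeat on a 500ms cadence and always die at the same step: the kubelet has no registered CSI plugin named kubevirt.io.hostpath-provisioner, which ordinarily means the driver's node pod has not (re)registered over the kubelet's plugin-registration socket since the restart. A minimal Go sketch of a node-side check, assuming the default registration directory /var/lib/kubelet/plugins_registry (the path and the socket name are assumptions, not taken from this log):

package main

import (
	"fmt"
	"os"
)

// Lists the plugin sockets in the kubelet's plugin-registration directory.
// Run directly on the node (for CRC, e.g. via `oc debug node/crc`). A healthy
// hostpath provisioner would appear as something like
// kubevirt.io.hostpath-provisioner-reg.sock.
func main() {
	const regDir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(regDir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", regDir, err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Println("no CSI plugins registered with this kubelet")
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}

If no registration socket for the driver shows up, recycling the hostpath-provisioner node pod is usually what re-triggers registration; the pending volume operations above then clear on their next retry.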
Dec 12 15:23:51 crc kubenswrapper[5099]: I1212 15:23:51.152591 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:23:51 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]process-running ok
Dec 12 15:23:51 crc kubenswrapper[5099]: healthz check failed
Dec 12 15:23:51 crc kubenswrapper[5099]: I1212 15:23:51.152684 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
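The router's startup probe output above follows the standard Kubernetes healthz report shape: one [+]name ok or [-]name failed: reason withheld line per registered check, then an overall verdict, with HTTP 500 when anything fails. A sketch of a handler producing that shape, assuming invented check names for illustration (a real router gates on backend sync state):

package main

import (
	"fmt"
	"net/http"
)

// check is one named health condition, mirroring entries such as
// "backend-http" and "has-synced" in the probe output above.
type check struct {
	name string
	run  func() error
}

// healthz renders the [+]/[-] report and returns HTTP 500 when any check
// fails, which is what makes the kubelet record "HTTP probe failed with
// statuscode: 500".
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body, "healthz check failed\n")
			return
		}
		fmt.Fprint(w, body, "ok\n")
	}
}

func main() {
	// Hypothetical checks for illustration only.
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}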
Dec 12 15:23:51 crc kubenswrapper[5099]: I1212 15:23:51.528888 5099 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-9d2g2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]log ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]etcd ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/max-in-flight-filter ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 12 15:23:51 crc kubenswrapper[5099]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/openshift.io-startinformers ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 12 15:23:51 crc kubenswrapper[5099]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 15:23:51 crc kubenswrapper[5099]: livez check failed
Dec 12 15:23:51 crc kubenswrapper[5099]: I1212 15:23:51.528982 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" podUID="44e6a852-c3c4-48d1-b80b-fb3c0897af7c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:23:51 crc kubenswrapper[5099]: I1212 15:23:51.591383 5099 generic.go:358] "Generic (PLEG): container finished" podID="4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" containerID="f90b3baac077d45f04b3fc3f9dbd4427b32149e2ce25735cf6bac2a261cab7f7" exitCode=0
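The volume retry loop running through this whole stretch (the nestedpendingoperations entries above) works by stamping a deadline on each failure: an operation on the same volume key is refused until the previous failure's deadline passes, then attempted again, here at a flat 500ms. A simplified model of that gate, assuming a fixed delay (the kubelet's real implementation also grows the delay exponentially per key):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate models the per-key retry refusal behind the "No retries
// permitted until ... (durationBeforeRetry 500ms)" entries: after a
// failure, the same key is rejected until its deadline passes.
type retryGate struct {
	delay time.Duration
	next  map[string]time.Time
}

func newRetryGate(delay time.Duration) *retryGate {
	return &retryGate{delay: delay, next: make(map[string]time.Time)}
}

func (g *retryGate) run(key string, op func() error) error {
	if deadline, ok := g.next[key]; ok && time.Now().Before(deadline) {
		return fmt.Errorf("no retries permitted until %s", deadline.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.next[key] = time.Now().Add(g.delay)
		return err
	}
	delete(g.next, key)
	return nil
}

func main() {
	gate := newRetryGate(500 * time.Millisecond)
	// Hypothetical operation that keeps failing, like MountDevice above.
	mount := func() error { return errors.New("driver not registered") }
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2"
	for i := 0; i < 4; i++ {
		fmt.Println(gate.run(key, mount))
		time.Sleep(200 * time.Millisecond)
	}
}

Running it shows the same rhythm as the log: one real attempt, two refusals quoting the deadline, then the next real attempt once 500ms has elapsed.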
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.319839 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" containerID="cri-o://91c9268029a6bdf10aa982e39b6977f79804db02bd518e8f3a97420bdea09a9d" gracePeriod=30
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.320344 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" containerID="cri-o://d72c88520bcf5ca60912d771527488a0fbbc6bc282947bc0fec93bdd220e95c2" gracePeriod=30
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.320459 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0"} pod="openshift-console/downloads-747b44746d-87m2r" containerMessage="Container download-server failed liveness probe, will be restarted"
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.320540 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" containerID="cri-o://5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0" gracePeriod=2
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.320606 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.320709 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 12 15:23:52 crc kubenswrapper[5099]: I1212 15:23:52.349018 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=10.349000214 podStartE2EDuration="10.349000214s" podCreationTimestamp="2025-12-12 15:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:52.343548222 +0000 UTC m=+170.447456873" watchObservedRunningTime="2025-12-12 15:23:52.349000214 +0000 UTC m=+170.452908855"
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.552343 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.552762 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.052745583 +0000 UTC m=+172.156654284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.774095 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.774274 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.274248945 +0000 UTC m=+172.378157576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.774483 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.774807 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.274799419 +0000 UTC m=+172.378708060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.875118 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.875277 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.375230573 +0000 UTC m=+172.479139224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.875687 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.876214 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.376179777 +0000 UTC m=+172.480088418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.929557 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"] Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.929615 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.930140 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.938854 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"] Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.938899 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.938962 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.938987 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerStarted","Data":"1d7a719447cc857009696b5c4ab873bdebfa8cf69201eafbe0f45070aa0a3f64"} Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.939004 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.976407 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.976711 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt97t\" (UniqueName: \"kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.976797 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.976854 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:53 crc kubenswrapper[5099]: E1212 15:23:53.977000 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.476980656 +0000 UTC m=+172.580889297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:53 crc kubenswrapper[5099]: I1212 15:23:53.987490 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=10.987458539 podStartE2EDuration="10.987458539s" podCreationTimestamp="2025-12-12 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:53.983619499 +0000 UTC m=+172.087528160" watchObservedRunningTime="2025-12-12 15:23:53.987458539 +0000 UTC m=+172.091367190" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.070119 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.070234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerStarted","Data":"6a9785249a7fc266214a8c4aa71befe83ab5a307c37643391ef2a478ad5a3486"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.070539 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerStarted","Data":"3d5ec0a1179cc21951c11aa9d8ee99310bab2b0c55516a12a71f1e79bafcd5c9"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.070571 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.071775 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.076324 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078203 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078268 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prddd\" (UniqueName: \"kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078328 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078411 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078469 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078494 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vt97t\" (UniqueName: \"kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.078524 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.079055 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.579038637 +0000 UTC m=+172.682947278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.079125 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.079373 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.107256 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt97t\" (UniqueName: \"kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t\") pod \"redhat-marketplace-8qwhh\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") " pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115261 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerStarted","Data":"7fd1f7f605d1c63a57f09b24e9def452ba73688b27c0ccfa031008393b950951"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115317 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f","Type":"ContainerDied","Data":"f90b3baac077d45f04b3fc3f9dbd4427b32149e2ce25735cf6bac2a261cab7f7"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115339 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115354 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerStarted","Data":"58cf2f2ac667ee9b915837197f6a51bacbd619d0bc1e8996ab6efa2bc16995d7"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115369 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.115559 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.150453 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:54 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:54 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:54 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.150551 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.167546 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d147e5_750e_4c46_bb7a_e99a34fca2f9.slice/crio-conmon-5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0.scope\": RecentStats: unable to find data in memory cache]" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.179283 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.179498 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74fjj\" (UniqueName: \"kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.179593 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.679557269 +0000 UTC m=+172.783465910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.179760 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.179867 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.179957 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prddd\" (UniqueName: \"kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.180112 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.180161 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.180522 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.181009 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.200361 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prddd\" (UniqueName: \"kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd\") pod \"redhat-operators-nx984\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " 
pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.234135 5099 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-22f86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.234205 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.248648 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.281380 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.281583 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.281791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.281822 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74fjj\" (UniqueName: \"kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.282173 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.282208 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.282250 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.782235996 +0000 UTC m=+172.886144637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.299497 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74fjj\" (UniqueName: \"kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj\") pod \"redhat-operators-8rxkm\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.340530 5099 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.382770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.382995 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.882961403 +0000 UTC m=+172.986870044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.484020 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.485006 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:54.984983733 +0000 UTC m=+173.088892374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.501177 5099 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T15:23:54.340568997Z","UUID":"a5ba7054-1233-49ca-88f5-804ee8f0f469","Handler":null,"Name":"","Endpoint":""} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.530145 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.534077 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.573568 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.585756 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.585976 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.085937836 +0000 UTC m=+173.189846477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.586652 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.587024 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.087004354 +0000 UTC m=+173.190912995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.688432 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.688614 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.188562112 +0000 UTC m=+173.292470753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.688758 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.689069 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.189056305 +0000 UTC m=+173.292964946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.748256 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.789550 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.789698 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.289673579 +0000 UTC m=+173.393582220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.790139 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.790372 5099 generic.go:358] "Generic (PLEG): container finished" podID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerID="91c9268029a6bdf10aa982e39b6977f79804db02bd518e8f3a97420bdea09a9d" exitCode=0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.790461 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" event={"ID":"1c0b0461-cdc7-4453-9766-eb3dc5385423","Type":"ContainerDied","Data":"91c9268029a6bdf10aa982e39b6977f79804db02bd518e8f3a97420bdea09a9d"} Dec 12 15:23:54 crc kubenswrapper[5099]: E1212 15:23:54.790520 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:23:55.290506351 +0000 UTC m=+173.394414992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-g59fk" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.792400 5099 generic.go:358] "Generic (PLEG): container finished" podID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerID="d72c88520bcf5ca60912d771527488a0fbbc6bc282947bc0fec93bdd220e95c2" exitCode=0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.792486 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" event={"ID":"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae","Type":"ContainerDied","Data":"d72c88520bcf5ca60912d771527488a0fbbc6bc282947bc0fec93bdd220e95c2"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.794740 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h686" event={"ID":"3adb4515-a2d2-4849-a626-81443f61d9d2","Type":"ContainerStarted","Data":"2c9e24bd0f5c95468adc3622c5b963a052a82c21c8bfbf2336308b02ff9a4815"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.796468 5099 generic.go:358] "Generic (PLEG): container finished" podID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerID="5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0" exitCode=0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.796545 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerDied","Data":"5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.798176 5099 generic.go:358] "Generic (PLEG): container finished" podID="358fc99d-f65f-48d7-97bc-61e86bb73c59" containerID="32d59a5006ea948fb9a2e2650afc0983533be16ec64fa91a8b9c0f8c38ca9edb" exitCode=0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.798201 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"358fc99d-f65f-48d7-97bc-61e86bb73c59","Type":"ContainerDied","Data":"32d59a5006ea948fb9a2e2650afc0983533be16ec64fa91a8b9c0f8c38ca9edb"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.799746 5099 generic.go:358] "Generic (PLEG): container finished" podID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerID="6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577" exitCode=0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.799836 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerDied","Data":"6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577"} Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.817612 5099 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.817699 5099 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 15:23:54 crc kubenswrapper[5099]: W1212 15:23:54.825199 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec8ac77_8f80_4a46_b769_37952a91485c.slice/crio-50b2e2cd38c0ea5ca00fe360b91959c8672d54653cf80ccfd9ab5d64c7ed9dc5 WatchSource:0}: Error finding container 50b2e2cd38c0ea5ca00fe360b91959c8672d54653cf80ccfd9ab5d64c7ed9dc5: Status 404 returned error can't find the container with id 50b2e2cd38c0ea5ca00fe360b91959c8672d54653cf80ccfd9ab5d64c7ed9dc5 Dec 12 15:23:54 crc kubenswrapper[5099]: W1212 15:23:54.833038 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a8a0d08_daaa_416f_b5b9_78c49ab92283.slice/crio-10a51897c939da746fc4706c6ef88ce630a4e0c621ad5d0c83c3baa591d55ebf WatchSource:0}: Error finding container 10a51897c939da746fc4706c6ef88ce630a4e0c621ad5d0c83c3baa591d55ebf: Status 404 returned error can't find the container with id 10a51897c939da746fc4706c6ef88ce630a4e0c621ad5d0c83c3baa591d55ebf Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.867701 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.891195 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.896363 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.993122 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.996447 5099 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 15:23:54 crc kubenswrapper[5099]: I1212 15:23:54.996500 5099 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.065500 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"] Dec 12 15:23:55 crc kubenswrapper[5099]: W1212 15:23:55.072261 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9acc6f5_a7ea_4640_b35d_014f831b262d.slice/crio-1417f5316fb8c887cafd413dd6fbe56a849ca5660ef3b4a2180d35d0e8dc77b0 WatchSource:0}: Error finding container 1417f5316fb8c887cafd413dd6fbe56a849ca5660ef3b4a2180d35d0e8dc77b0: Status 404 returned error can't find the container with id 1417f5316fb8c887cafd413dd6fbe56a849ca5660ef3b4a2180d35d0e8dc77b0 Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.150386 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:55 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:55 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:55 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.150486 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.351881 5099 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-48xth container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.351973 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.381701 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-g59fk\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") " pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.412656 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-9d2g2" 
Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.567815 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.665418 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zkw7z" Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.811020 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerStarted","Data":"50b2e2cd38c0ea5ca00fe360b91959c8672d54653cf80ccfd9ab5d64c7ed9dc5"} Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.812454 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerStarted","Data":"10a51897c939da746fc4706c6ef88ce630a4e0c621ad5d0c83c3baa591d55ebf"} Dec 12 15:23:55 crc kubenswrapper[5099]: I1212 15:23:55.813590 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerStarted","Data":"1417f5316fb8c887cafd413dd6fbe56a849ca5660ef3b4a2180d35d0e8dc77b0"} Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.083477 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.112261 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access\") pod \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.112320 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir\") pod \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\" (UID: \"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.113900 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" (UID: "4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.152330 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:56 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:56 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:56 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.152407 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.175292 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" (UID: "4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.198833 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.219430 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access\") pod \"358fc99d-f65f-48d7-97bc-61e86bb73c59\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.219486 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir\") pod \"358fc99d-f65f-48d7-97bc-61e86bb73c59\" (UID: \"358fc99d-f65f-48d7-97bc-61e86bb73c59\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.219725 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.219743 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.219805 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "358fc99d-f65f-48d7-97bc-61e86bb73c59" (UID: "358fc99d-f65f-48d7-97bc-61e86bb73c59"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.224229 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"] Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.230567 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "358fc99d-f65f-48d7-97bc-61e86bb73c59" (UID: "358fc99d-f65f-48d7-97bc-61e86bb73c59"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: W1212 15:23:56.243777 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8221fb7_b435_4a06_8a6d_7bcc4afda383.slice/crio-7606fb3ee7897327f1fffc35c2dd6155ea872ca2c0865ee8105b11be06e8eb89 WatchSource:0}: Error finding container 7606fb3ee7897327f1fffc35c2dd6155ea872ca2c0865ee8105b11be06e8eb89: Status 404 returned error can't find the container with id 7606fb3ee7897327f1fffc35c2dd6155ea872ca2c0865ee8105b11be06e8eb89 Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.320599 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/358fc99d-f65f-48d7-97bc-61e86bb73c59-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.320649 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/358fc99d-f65f-48d7-97bc-61e86bb73c59-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.377547 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.381187 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404172 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404835 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404857 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404871 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404878 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404887 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404892 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404902 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="358fc99d-f65f-48d7-97bc-61e86bb73c59" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404907 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="358fc99d-f65f-48d7-97bc-61e86bb73c59" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.404990 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.405000 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" containerName="route-controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.405011 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="358fc99d-f65f-48d7-97bc-61e86bb73c59" containerName="pruner" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.405019 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" containerName="controller-manager" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421556 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjjvl\" (UniqueName: \"kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421599 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnj7s\" (UniqueName: \"kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s\") pod \"1c0b0461-cdc7-4453-9766-eb3dc5385423\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421677 5099 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421727 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp\") pod \"1c0b0461-cdc7-4453-9766-eb3dc5385423\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421751 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421798 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421818 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca\") pod \"1c0b0461-cdc7-4453-9766-eb3dc5385423\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421851 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config\") pod \"1c0b0461-cdc7-4453-9766-eb3dc5385423\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421917 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert\") pod \"1c0b0461-cdc7-4453-9766-eb3dc5385423\" (UID: \"1c0b0461-cdc7-4453-9766-eb3dc5385423\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.421994 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.422039 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config\") pod \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\" (UID: \"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae\") " Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.424516 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config" (OuterVolumeSpecName: "config") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425005 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp" (OuterVolumeSpecName: "tmp") pod "1c0b0461-cdc7-4453-9766-eb3dc5385423" (UID: "1c0b0461-cdc7-4453-9766-eb3dc5385423"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425415 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c0b0461-cdc7-4453-9766-eb3dc5385423" (UID: "1c0b0461-cdc7-4453-9766-eb3dc5385423"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425597 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config" (OuterVolumeSpecName: "config") pod "1c0b0461-cdc7-4453-9766-eb3dc5385423" (UID: "1c0b0461-cdc7-4453-9766-eb3dc5385423"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425834 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp" (OuterVolumeSpecName: "tmp") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425899 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.425956 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.426470 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl" (OuterVolumeSpecName: "kube-api-access-jjjvl") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "kube-api-access-jjjvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.431005 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s" (OuterVolumeSpecName: "kube-api-access-pnj7s") pod "1c0b0461-cdc7-4453-9766-eb3dc5385423" (UID: "1c0b0461-cdc7-4453-9766-eb3dc5385423"). InnerVolumeSpecName "kube-api-access-pnj7s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.432952 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c0b0461-cdc7-4453-9766-eb3dc5385423" (UID: "1c0b0461-cdc7-4453-9766-eb3dc5385423"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.435595 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" (UID: "ac07f35f-c1de-4d9f-9f8e-2eb135e271ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524180 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524232 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c0b0461-cdc7-4453-9766-eb3dc5385423-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524244 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524254 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524266 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524277 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c0b0461-cdc7-4453-9766-eb3dc5385423-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524285 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c0b0461-cdc7-4453-9766-eb3dc5385423-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524293 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524304 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 15:23:56.524312 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjjvl\" (UniqueName: \"kubernetes.io/projected/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae-kube-api-access-jjjvl\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:56 crc kubenswrapper[5099]: I1212 
15:23:56.524320 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pnj7s\" (UniqueName: \"kubernetes.io/projected/1c0b0461-cdc7-4453-9766-eb3dc5385423-kube-api-access-pnj7s\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.150382 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:57 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:57 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:57 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.150734 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.855620 5099 generic.go:358] "Generic (PLEG): container finished" podID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerID="5e71dd88e1f4ec7fa52bd6823865b751fc7a5da7cfeb99d03fe824a21012abb1" exitCode=0 Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.857421 5099 generic.go:358] "Generic (PLEG): container finished" podID="91162a66-bdaa-4786-ad25-bde12241ebae" containerID="a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7" exitCode=0 Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.860501 5099 generic.go:358] "Generic (PLEG): container finished" podID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerID="34d272199636b7e4436bbdbdcd8195803427324fc0c3d4271493e9620662ba4e" exitCode=0 Dec 12 15:23:57 crc kubenswrapper[5099]: I1212 15:23:57.862871 5099 generic.go:358] "Generic (PLEG): container finished" podID="7becc184-0a0c-4a25-919f-6359f1da964e" containerID="db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02" exitCode=0 Dec 12 15:23:58 crc kubenswrapper[5099]: I1212 15:23:58.150420 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:58 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:58 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:58 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:58 crc kubenswrapper[5099]: I1212 15:23:58.150533 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:59 crc kubenswrapper[5099]: E1212 15:23:59.092370 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:59 crc kubenswrapper[5099]: E1212 15:23:59.094723 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:59 crc kubenswrapper[5099]: E1212 15:23:59.096930 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:23:59 crc kubenswrapper[5099]: E1212 15:23:59.097021 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:23:59 crc kubenswrapper[5099]: I1212 15:23:59.149899 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:23:59 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:23:59 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:23:59 crc kubenswrapper[5099]: healthz check failed Dec 12 15:23:59 crc kubenswrapper[5099]: I1212 15:23:59.149993 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:23:59 crc kubenswrapper[5099]: I1212 15:23:59.340195 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53744: no serving certificate available for the kubelet" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.149437 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:24:00 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:24:00 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:24:00 crc kubenswrapper[5099]: healthz check failed Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.149535 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.561741 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-xpdmq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.561829 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xpdmq" podUID="aa68d84c-a712-4979-afe1-bdb4f8329372" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.578647 5099 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.579390 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.580059 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.580644 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.582336 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.590495 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.592048 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.593776 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.600445 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.601384 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.608343 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.610823 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.612889 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615317 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615359 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"4aed11c0-468e-4ff4-aff1-c6a4c8d3a97f","Type":"ContainerDied","Data":"a2f4b9ba312a31228dcb44b4eec73ef55a6a2a490bfe5fe86e40a7fe731d46d7"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615394 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f4b9ba312a31228dcb44b4eec73ef55a6a2a490bfe5fe86e40a7fe731d46d7" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615410 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" 
event={"ID":"358fc99d-f65f-48d7-97bc-61e86bb73c59","Type":"ContainerDied","Data":"da83e0b2abda99b5dadaf45cee58a8952aff3ca779234cdd6c65e93360269193"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615427 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da83e0b2abda99b5dadaf45cee58a8952aff3ca779234cdd6c65e93360269193" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615437 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86" event={"ID":"1c0b0461-cdc7-4453-9766-eb3dc5385423","Type":"ContainerDied","Data":"fb5df76e839bca6872314ca26ea198055979be24d75c22413a9f9f3766c6c3b6"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615452 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerStarted","Data":"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615465 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-48xth" event={"ID":"ac07f35f-c1de-4d9f-9f8e-2eb135e271ae","Type":"ContainerDied","Data":"4c2fb33569f43c57b1c3d0221d356ae63a8a04f8f2aba09c8533f6c5c5659e11"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615477 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h686" event={"ID":"3adb4515-a2d2-4849-a626-81443f61d9d2","Type":"ContainerStarted","Data":"6e904ae9f6b852347f1d0949fd4fd5bc4b6bd3102961ef23e700d75e4c0e45fa"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615497 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerStarted","Data":"34d272199636b7e4436bbdbdcd8195803427324fc0c3d4271493e9620662ba4e"} Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.615511 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.616213 5099 scope.go:117] "RemoveContainer" containerID="91c9268029a6bdf10aa982e39b6977f79804db02bd518e8f3a97420bdea09a9d" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649401 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649500 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649640 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca\") pod 
\"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649801 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhmm\" (UniqueName: \"kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649837 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.649888 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.676072 5099 scope.go:117] "RemoveContainer" containerID="d72c88520bcf5ca60912d771527488a0fbbc6bc282947bc0fec93bdd220e95c2" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.752759 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhmm\" (UniqueName: \"kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.752809 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.752838 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.752938 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.753372 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.753441 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.755045 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.756037 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.756507 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.756723 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.765570 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.772440 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhmm\" (UniqueName: \"kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm\") pod \"controller-manager-5cc4995647-qjqr8\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:00 crc kubenswrapper[5099]: I1212 15:24:00.957641 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.029652 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerID="cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874" exitCode=0 Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.088900 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.088940 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerStarted","Data":"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090213 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" event={"ID":"a8221fb7-b435-4a06-8a6d-7bcc4afda383","Type":"ContainerStarted","Data":"7606fb3ee7897327f1fffc35c2dd6155ea872ca2c0865ee8105b11be06e8eb89"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerStarted","Data":"5e71dd88e1f4ec7fa52bd6823865b751fc7a5da7cfeb99d03fe824a21012abb1"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090315 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090337 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerDied","Data":"5e71dd88e1f4ec7fa52bd6823865b751fc7a5da7cfeb99d03fe824a21012abb1"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090360 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-22f86"] Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.090450 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.113763 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.113941 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.114922 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.116162 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.116200 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117371 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerDied","Data":"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117427 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerDied","Data":"34d272199636b7e4436bbdbdcd8195803427324fc0c3d4271493e9620662ba4e"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117446 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117469 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerDied","Data":"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117498 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-48xth"] Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117520 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerDied","Data":"cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874"} Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.117593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.151057 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:24:01 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:24:01 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:24:01 crc kubenswrapper[5099]: healthz check failed Dec 12 15:24:01 crc 
kubenswrapper[5099]: I1212 15:24:01.151137 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.163210 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.163585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.163702 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tph4l\" (UniqueName: \"kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.165248 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.165354 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.428453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.428875 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.428935 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tph4l\" (UniqueName: \"kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.428989 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.429009 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.429610 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.430729 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.450523 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.454599 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tph4l\" (UniqueName: \"kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.499939 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca\") pod \"route-controller-manager-776dc4f556-7dsdp\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.518307 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:01 crc kubenswrapper[5099]: I1212 15:24:01.784426 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.049295 5099 generic.go:358] "Generic (PLEG): container finished" podID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerID="884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718" exitCode=0 Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.049792 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerDied","Data":"884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718"} Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.059230 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" event={"ID":"a8221fb7-b435-4a06-8a6d-7bcc4afda383","Type":"ContainerStarted","Data":"34657fc68ce55220860421524cc9058b020291ff4143d2ad2b1abad87745bf13"} Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.059588 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.067877 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" event={"ID":"3d6ffe20-cc14-4acf-a082-be6f5f093040","Type":"ContainerStarted","Data":"0567439cdf35693f837158708c5988cdc56d538b34cfc5b4d3bf6ac19ae830c3"} Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.071330 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.072405 5099 generic.go:358] "Generic (PLEG): container finished" podID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerID="0c90461e952f1ed81818a648eed1a94a8fb18a95feada73147b0a351a77be4ac" exitCode=0 Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.072563 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerDied","Data":"0c90461e952f1ed81818a648eed1a94a8fb18a95feada73147b0a351a77be4ac"} Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.080065 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerStarted","Data":"73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7"} Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.080605 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.081080 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.081143 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" 
podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.152949 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgl7b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:24:02 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Dec 12 15:24:02 crc kubenswrapper[5099]: [+]process-running ok Dec 12 15:24:02 crc kubenswrapper[5099]: healthz check failed Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.153051 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" podUID="174ac316-3890-4143-b377-559d8d137c5c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.643325 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" podStartSLOduration=156.643246705 podStartE2EDuration="2m36.643246705s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:02.606322142 +0000 UTC m=+180.710230803" watchObservedRunningTime="2025-12-12 15:24:02.643246705 +0000 UTC m=+180.747155346" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.703492 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0b0461-cdc7-4453-9766-eb3dc5385423" path="/var/lib/kubelet/pods/1c0b0461-cdc7-4453-9766-eb3dc5385423/volumes" Dec 12 15:24:02 crc kubenswrapper[5099]: I1212 15:24:02.704335 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac07f35f-c1de-4d9f-9f8e-2eb135e271ae" path="/var/lib/kubelet/pods/ac07f35f-c1de-4d9f-9f8e-2eb135e271ae/volumes" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.094877 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" event={"ID":"3d6ffe20-cc14-4acf-a082-be6f5f093040","Type":"ContainerStarted","Data":"f95c89425153d7fa4a7f518c5098c8b0e218dde53d08030271cb25de4d26f4f1"} Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.096209 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.099149 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" event={"ID":"cc0d337c-2838-4d46-830d-17a540e4d7ae","Type":"ContainerStarted","Data":"d7612cc49ffa7015df59515fb9370085f351b2ecedd6d176c1aeb6aac514a2ff"} Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.106580 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h686" event={"ID":"3adb4515-a2d2-4849-a626-81443f61d9d2","Type":"ContainerStarted","Data":"d2fe5e92ba6856888e2977b0e5708def2e8a1e1274a0e98daedee7a895673ab2"} Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.113505 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.113576 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.129224 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" podStartSLOduration=10.129173576 podStartE2EDuration="10.129173576s" podCreationTimestamp="2025-12-12 15:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:03.116222089 +0000 UTC m=+181.220130750" watchObservedRunningTime="2025-12-12 15:24:03.129173576 +0000 UTC m=+181.233082217" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.149434 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7h686" podStartSLOduration=46.149412863 podStartE2EDuration="46.149412863s" podCreationTimestamp="2025-12-12 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:03.149010773 +0000 UTC m=+181.252919414" watchObservedRunningTime="2025-12-12 15:24:03.149412863 +0000 UTC m=+181.253321504" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.160876 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.166292 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-bgl7b" Dec 12 15:24:03 crc kubenswrapper[5099]: I1212 15:24:03.783215 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:04 crc kubenswrapper[5099]: I1212 15:24:04.119736 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" event={"ID":"cc0d337c-2838-4d46-830d-17a540e4d7ae","Type":"ContainerStarted","Data":"ebaca2347d6d4dd6255397d18f1455a4056ae808ae06db8fdba896b3c3e23f18"} Dec 12 15:24:05 crc kubenswrapper[5099]: I1212 15:24:05.142431 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:05 crc kubenswrapper[5099]: I1212 15:24:05.174901 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" podStartSLOduration=12.174869292 podStartE2EDuration="12.174869292s" podCreationTimestamp="2025-12-12 15:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:05.166292758 +0000 UTC m=+183.270201399" watchObservedRunningTime="2025-12-12 15:24:05.174869292 +0000 UTC m=+183.278777933" Dec 12 15:24:05 crc kubenswrapper[5099]: I1212 15:24:05.447558 5099 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:07 crc kubenswrapper[5099]: I1212 15:24:07.165597 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kf45z" Dec 12 15:24:07 crc kubenswrapper[5099]: I1212 15:24:07.781537 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:24:07 crc kubenswrapper[5099]: I1212 15:24:07.784025 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" podUID="3d6ffe20-cc14-4acf-a082-be6f5f093040" containerName="controller-manager" containerID="cri-o://f95c89425153d7fa4a7f518c5098c8b0e218dde53d08030271cb25de4d26f4f1" gracePeriod=30 Dec 12 15:24:07 crc kubenswrapper[5099]: I1212 15:24:07.834451 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:08 crc kubenswrapper[5099]: I1212 15:24:08.552265 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" podUID="cc0d337c-2838-4d46-830d-17a540e4d7ae" containerName="route-controller-manager" containerID="cri-o://ebaca2347d6d4dd6255397d18f1455a4056ae808ae06db8fdba896b3c3e23f18" gracePeriod=30 Dec 12 15:24:09 crc kubenswrapper[5099]: E1212 15:24:09.092955 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:09 crc kubenswrapper[5099]: E1212 15:24:09.095078 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:09 crc kubenswrapper[5099]: E1212 15:24:09.100787 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:09 crc kubenswrapper[5099]: E1212 15:24:09.100862 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:24:09 crc kubenswrapper[5099]: I1212 15:24:09.852093 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:09 crc kubenswrapper[5099]: I1212 15:24:09.852264 5099 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.568746 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.573119 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-xpdmq" Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.582704 5099 generic.go:358] "Generic (PLEG): container finished" podID="cc0d337c-2838-4d46-830d-17a540e4d7ae" containerID="ebaca2347d6d4dd6255397d18f1455a4056ae808ae06db8fdba896b3c3e23f18" exitCode=0 Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.582809 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" event={"ID":"cc0d337c-2838-4d46-830d-17a540e4d7ae","Type":"ContainerDied","Data":"ebaca2347d6d4dd6255397d18f1455a4056ae808ae06db8fdba896b3c3e23f18"} Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.588019 5099 generic.go:358] "Generic (PLEG): container finished" podID="3d6ffe20-cc14-4acf-a082-be6f5f093040" containerID="f95c89425153d7fa4a7f518c5098c8b0e218dde53d08030271cb25de4d26f4f1" exitCode=0 Dec 12 15:24:10 crc kubenswrapper[5099]: I1212 15:24:10.588072 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" event={"ID":"3d6ffe20-cc14-4acf-a082-be6f5f093040","Type":"ContainerDied","Data":"f95c89425153d7fa4a7f518c5098c8b0e218dde53d08030271cb25de4d26f4f1"} Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.300503 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.306711 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.330816 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.331879 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc0d337c-2838-4d46-830d-17a540e4d7ae" containerName="route-controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.331913 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0d337c-2838-4d46-830d-17a540e4d7ae" containerName="route-controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.331933 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d6ffe20-cc14-4acf-a082-be6f5f093040" containerName="controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.331941 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ffe20-cc14-4acf-a082-be6f5f093040" containerName="controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.332100 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc0d337c-2838-4d46-830d-17a540e4d7ae" containerName="route-controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.332118 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3d6ffe20-cc14-4acf-a082-be6f5f093040" containerName="controller-manager" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.449829 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tph4l\" (UniqueName: \"kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l\") pod \"cc0d337c-2838-4d46-830d-17a540e4d7ae\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.449971 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert\") pod \"cc0d337c-2838-4d46-830d-17a540e4d7ae\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450027 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450071 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450108 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450177 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp\") pod 
\"cc0d337c-2838-4d46-830d-17a540e4d7ae\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450269 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450329 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca\") pod \"cc0d337c-2838-4d46-830d-17a540e4d7ae\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450454 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450506 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhmm\" (UniqueName: \"kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm\") pod \"3d6ffe20-cc14-4acf-a082-be6f5f093040\" (UID: \"3d6ffe20-cc14-4acf-a082-be6f5f093040\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.450534 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config\") pod \"cc0d337c-2838-4d46-830d-17a540e4d7ae\" (UID: \"cc0d337c-2838-4d46-830d-17a540e4d7ae\") " Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.451279 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp" (OuterVolumeSpecName: "tmp") pod "cc0d337c-2838-4d46-830d-17a540e4d7ae" (UID: "cc0d337c-2838-4d46-830d-17a540e4d7ae"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.451428 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.451728 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp" (OuterVolumeSpecName: "tmp") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.451797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.452232 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config" (OuterVolumeSpecName: "config") pod "cc0d337c-2838-4d46-830d-17a540e4d7ae" (UID: "cc0d337c-2838-4d46-830d-17a540e4d7ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.452442 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "cc0d337c-2838-4d46-830d-17a540e4d7ae" (UID: "cc0d337c-2838-4d46-830d-17a540e4d7ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.452928 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config" (OuterVolumeSpecName: "config") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.459927 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm" (OuterVolumeSpecName: "kube-api-access-9xhmm") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "kube-api-access-9xhmm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.459929 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d6ffe20-cc14-4acf-a082-be6f5f093040" (UID: "3d6ffe20-cc14-4acf-a082-be6f5f093040"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.459951 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc0d337c-2838-4d46-830d-17a540e4d7ae" (UID: "cc0d337c-2838-4d46-830d-17a540e4d7ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.459971 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l" (OuterVolumeSpecName: "kube-api-access-tph4l") pod "cc0d337c-2838-4d46-830d-17a540e4d7ae" (UID: "cc0d337c-2838-4d46-830d-17a540e4d7ae"). InnerVolumeSpecName "kube-api-access-tph4l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551871 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xhmm\" (UniqueName: \"kubernetes.io/projected/3d6ffe20-cc14-4acf-a082-be6f5f093040-kube-api-access-9xhmm\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551922 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551935 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tph4l\" (UniqueName: \"kubernetes.io/projected/cc0d337c-2838-4d46-830d-17a540e4d7ae-kube-api-access-tph4l\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551947 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d337c-2838-4d46-830d-17a540e4d7ae-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551961 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551975 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.551987 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffe20-cc14-4acf-a082-be6f5f093040-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.552006 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc0d337c-2838-4d46-830d-17a540e4d7ae-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.552020 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d6ffe20-cc14-4acf-a082-be6f5f093040-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.552031 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d337c-2838-4d46-830d-17a540e4d7ae-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:11 crc kubenswrapper[5099]: I1212 15:24:11.552041 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6ffe20-cc14-4acf-a082-be6f5f093040-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.251134 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.251351 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.795468 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" event={"ID":"cc0d337c-2838-4d46-830d-17a540e4d7ae","Type":"ContainerDied","Data":"d7612cc49ffa7015df59515fb9370085f351b2ecedd6d176c1aeb6aac514a2ff"} Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.795597 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.795633 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" event={"ID":"3d6ffe20-cc14-4acf-a082-be6f5f093040","Type":"ContainerDied","Data":"0567439cdf35693f837158708c5988cdc56d538b34cfc5b4d3bf6ac19ae830c3"} Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.796281 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.796523 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4995647-qjqr8" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.797216 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.799508 5099 scope.go:117] "RemoveContainer" containerID="ebaca2347d6d4dd6255397d18f1455a4056ae808ae06db8fdba896b3c3e23f18" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.801302 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.803125 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.809349 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.809768 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.809802 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.815995 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.825331 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.868881 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca\") pod 
\"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.868936 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.869476 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.869501 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkcd9\" (UniqueName: \"kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.869733 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.971490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.971545 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nkcd9\" (UniqueName: \"kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.971585 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.971705 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca\") pod 
\"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.971998 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.972311 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.973127 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.973458 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.990030 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:13 crc kubenswrapper[5099]: I1212 15:24:13.994339 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkcd9\" (UniqueName: \"kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9\") pod \"route-controller-manager-77cc6f5584-xfnqf\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.230684 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272626 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272714 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272733 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4995647-qjqr8"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272754 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272769 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776dc4f556-7dsdp"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.272939 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.275970 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.280360 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.281593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.282323 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.455689 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.455928 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.460496 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.495975 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6ffe20-cc14-4acf-a082-be6f5f093040" path="/var/lib/kubelet/pods/3d6ffe20-cc14-4acf-a082-be6f5f093040/volumes" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.497185 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0d337c-2838-4d46-830d-17a540e4d7ae" path="/var/lib/kubelet/pods/cc0d337c-2838-4d46-830d-17a540e4d7ae/volumes" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.498137 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553729 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553791 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553812 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553830 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jkj\" (UniqueName: \"kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553882 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.553918 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.656248 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.656752 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.656920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config\") pod \"controller-manager-758f4758b8-59zs8\" (UID: 
\"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.657040 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jkj\" (UniqueName: \"kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.657257 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.657415 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.657971 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.660076 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.660389 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.664470 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.673785 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.680778 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jkj\" 
(UniqueName: \"kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj\") pod \"controller-manager-758f4758b8-59zs8\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.687472 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.687677 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.695009 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.695308 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.703347 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5wjhf_9fbc0f6e-a03e-414c-8f95-4bc036fac71b/kube-multus-additional-cni-plugins/0.log" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.703426 5099 generic.go:358] "Generic (PLEG): container finished" podID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" exitCode=137 Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.703837 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" event={"ID":"9fbc0f6e-a03e-414c-8f95-4bc036fac71b","Type":"ContainerDied","Data":"a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3"} Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.758563 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.759088 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.780162 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.861055 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.861193 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.861424 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:14 crc kubenswrapper[5099]: I1212 15:24:14.882925 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:15 crc kubenswrapper[5099]: I1212 15:24:15.035120 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:24:19 crc kubenswrapper[5099]: E1212 15:24:19.089885 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:19 crc kubenswrapper[5099]: E1212 15:24:19.090830 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:19 crc kubenswrapper[5099]: E1212 15:24:19.091456 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:19 crc kubenswrapper[5099]: E1212 15:24:19.091564 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" probeType="Readiness" 
pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:24:19 crc kubenswrapper[5099]: I1212 15:24:19.612091 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:24:19 crc kubenswrapper[5099]: I1212 15:24:19.846706 5099 ???:1] "http: TLS handshake error from 192.168.126.11:48142: no serving certificate available for the kubelet" Dec 12 15:24:19 crc kubenswrapper[5099]: I1212 15:24:19.851906 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:19 crc kubenswrapper[5099]: I1212 15:24:19.852010 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.502916 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.510449 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.599440 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.599592 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.599644 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.700556 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.700625 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.700655 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.700771 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.700859 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.724356 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access\") pod \"installer-12-crc\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:22 crc kubenswrapper[5099]: I1212 15:24:22.825833 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:24:23 crc kubenswrapper[5099]: I1212 15:24:23.107734 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:23 crc kubenswrapper[5099]: I1212 15:24:23.110610 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:23 crc kubenswrapper[5099]: I1212 15:24:23.112511 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" Dec 12 15:24:29 crc kubenswrapper[5099]: E1212 15:24:29.091092 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:29 crc kubenswrapper[5099]: E1212 15:24:29.093904 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:29 crc kubenswrapper[5099]: E1212 15:24:29.094410 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:29 crc kubenswrapper[5099]: E1212 15:24:29.094476 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.851257 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.851344 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.851420 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.852050 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.852098 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.852557 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7"} pod="openshift-console/downloads-747b44746d-87m2r" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 15:24:29 crc kubenswrapper[5099]: I1212 15:24:29.852614 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" containerID="cri-o://73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7" gracePeriod=2 Dec 12 15:24:30 crc kubenswrapper[5099]: I1212 15:24:30.866143 5099 generic.go:358] "Generic (PLEG): container finished" podID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerID="73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7" exitCode=0 Dec 12 15:24:30 crc kubenswrapper[5099]: I1212 15:24:30.866228 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerDied","Data":"73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7"} Dec 12 15:24:39 crc kubenswrapper[5099]: E1212 15:24:39.090306 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:39 crc kubenswrapper[5099]: E1212 15:24:39.092245 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:39 crc kubenswrapper[5099]: E1212 15:24:39.093032 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:39 crc kubenswrapper[5099]: E1212 15:24:39.093134 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:24:39 crc kubenswrapper[5099]: I1212 15:24:39.853384 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:39 crc kubenswrapper[5099]: I1212 15:24:39.853501 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:46 crc kubenswrapper[5099]: I1212 15:24:46.516214 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:24:46 crc kubenswrapper[5099]: I1212 15:24:46.516738 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:24:49 crc kubenswrapper[5099]: E1212 
15:24:49.089963 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:49 crc kubenswrapper[5099]: E1212 15:24:49.090879 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:49 crc kubenswrapper[5099]: E1212 15:24:49.091470 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:24:49 crc kubenswrapper[5099]: E1212 15:24:49.091511 5099 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:24:49 crc kubenswrapper[5099]: I1212 15:24:49.852843 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:49 crc kubenswrapper[5099]: I1212 15:24:49.853259 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:52 crc kubenswrapper[5099]: I1212 15:24:52.903425 5099 scope.go:117] "RemoveContainer" containerID="f95c89425153d7fa4a7f518c5098c8b0e218dde53d08030271cb25de4d26f4f1" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.283173 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5wjhf_9fbc0f6e-a03e-414c-8f95-4bc036fac71b/kube-multus-additional-cni-plugins/0.log" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.283686 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.378640 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") pod \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.378865 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxm7q\" (UniqueName: \"kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q\") pod \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.378972 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready\") pod \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.379005 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir\") pod \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\" (UID: \"9fbc0f6e-a03e-414c-8f95-4bc036fac71b\") " Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.379321 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "9fbc0f6e-a03e-414c-8f95-4bc036fac71b" (UID: "9fbc0f6e-a03e-414c-8f95-4bc036fac71b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.379634 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready" (OuterVolumeSpecName: "ready") pod "9fbc0f6e-a03e-414c-8f95-4bc036fac71b" (UID: "9fbc0f6e-a03e-414c-8f95-4bc036fac71b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.380061 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "9fbc0f6e-a03e-414c-8f95-4bc036fac71b" (UID: "9fbc0f6e-a03e-414c-8f95-4bc036fac71b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.387620 5099 scope.go:117] "RemoveContainer" containerID="5cd9a0aab942a07b222395a3c34cd201537845f7a0fd7f5c8d7aa1a3f657f4b0" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.389008 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q" (OuterVolumeSpecName: "kube-api-access-mxm7q") pod "9fbc0f6e-a03e-414c-8f95-4bc036fac71b" (UID: "9fbc0f6e-a03e-414c-8f95-4bc036fac71b"). InnerVolumeSpecName "kube-api-access-mxm7q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.482104 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxm7q\" (UniqueName: \"kubernetes.io/projected/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-kube-api-access-mxm7q\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.482514 5099 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-ready\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.482533 5099 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.482555 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9fbc0f6e-a03e-414c-8f95-4bc036fac71b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.765084 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.839486 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.896208 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:24:56 crc kubenswrapper[5099]: I1212 15:24:56.953751 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:24:57 crc kubenswrapper[5099]: W1212 15:24:57.001735 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d7a9f8_9cc7_4caa_b072_6f1c56abd81a.slice/crio-e0afb50290ce8112fbba728586e3f6270f63572f180644a7c4135ff60e5b7eb2 WatchSource:0}: Error finding container e0afb50290ce8112fbba728586e3f6270f63572f180644a7c4135ff60e5b7eb2: Status 404 returned error can't find the container with id e0afb50290ce8112fbba728586e3f6270f63572f180644a7c4135ff60e5b7eb2 Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.195274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" event={"ID":"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a","Type":"ContainerStarted","Data":"e0afb50290ce8112fbba728586e3f6270f63572f180644a7c4135ff60e5b7eb2"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.203888 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerStarted","Data":"d4612ed2327191bbd221c38395d0ef39155a0852fdc60468bd8adc428b0c08db"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.208131 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d6634e23-c162-4979-935f-7737487537b4","Type":"ContainerStarted","Data":"5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.210005 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerStarted","Data":"59a0c8863b379414ba5765f826a6f80d964d54c6affd9d9af768840cbfedd78d"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.210616 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.210991 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.211041 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.212865 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f","Type":"ContainerStarted","Data":"68581b6301e7c97b89bbebd2b213a5d88d4f6c34854dc32e4a696a26a375990c"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.214021 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" event={"ID":"192cc6bf-9cda-439b-a3ba-197e6eefb3a0","Type":"ContainerStarted","Data":"2adfbbd937ccddc6a5fd73b323de518ed2e9ec9d73936f58c219277de6e021dd"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.215896 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerStarted","Data":"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.234190 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5wjhf_9fbc0f6e-a03e-414c-8f95-4bc036fac71b/kube-multus-additional-cni-plugins/0.log" Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.234590 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" event={"ID":"9fbc0f6e-a03e-414c-8f95-4bc036fac71b","Type":"ContainerDied","Data":"bc72143c336438804d305bdfa2264ed4fc89953d808810af010049c05f46e662"} Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.234716 5099 scope.go:117] "RemoveContainer" containerID="a9368e1081a189bb4f679a9de7c9416c3c40d6c7ed1e95c9bca0d126275acbc3" Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.234785 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5wjhf" Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.424830 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5wjhf"] Dec 12 15:24:57 crc kubenswrapper[5099]: I1212 15:24:57.429317 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5wjhf"] Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.265206 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerStarted","Data":"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.272587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" event={"ID":"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a","Type":"ContainerStarted","Data":"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.274147 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.277023 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerStarted","Data":"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.282165 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerStarted","Data":"03df598b0d3adc44625414c00f50e7e3b8d008e79b8bbf1dabdfc61495d7716a"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.284974 5099 generic.go:358] "Generic (PLEG): container finished" podID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerID="d4612ed2327191bbd221c38395d0ef39155a0852fdc60468bd8adc428b0c08db" exitCode=0 Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.285246 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerDied","Data":"d4612ed2327191bbd221c38395d0ef39155a0852fdc60468bd8adc428b0c08db"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.290578 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d6634e23-c162-4979-935f-7737487537b4","Type":"ContainerStarted","Data":"b17b1acb60adb878edcc04ae50354df31a8cff5f63a0f002ca585ab318b85960"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.315396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerStarted","Data":"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.330833 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f","Type":"ContainerStarted","Data":"94a27d2417f2326d9e22e718357765a366428b2bba7910a9a871cdff7c0da7cb"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 
15:24:58.332873 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" event={"ID":"192cc6bf-9cda-439b-a3ba-197e6eefb3a0","Type":"ContainerStarted","Data":"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.335994 5099 generic.go:358] "Generic (PLEG): container finished" podID="91162a66-bdaa-4786-ad25-bde12241ebae" containerID="a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4" exitCode=0 Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.340438 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerDied","Data":"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.354852 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerStarted","Data":"e91d917f2ec80b121f7b03887822d6187a2bc1050f6f1d8daf2172589a411299"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.383499 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerStarted","Data":"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b"} Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.392678 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.398794 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.636273 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" podStartSLOduration=51.636254883 podStartE2EDuration="51.636254883s" podCreationTimestamp="2025-12-12 15:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:58.558792427 +0000 UTC m=+236.662701078" watchObservedRunningTime="2025-12-12 15:24:58.636254883 +0000 UTC m=+236.740163524" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.646068 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" path="/var/lib/kubelet/pods/9fbc0f6e-a03e-414c-8f95-4bc036fac71b/volumes" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.694067 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.755600 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=44.75558053 podStartE2EDuration="44.75558053s" 
podCreationTimestamp="2025-12-12 15:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:58.713624918 +0000 UTC m=+236.817533559" watchObservedRunningTime="2025-12-12 15:24:58.75558053 +0000 UTC m=+236.859489171" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.779264 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=39.77924516 podStartE2EDuration="39.77924516s" podCreationTimestamp="2025-12-12 15:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:58.778921232 +0000 UTC m=+236.882829873" watchObservedRunningTime="2025-12-12 15:24:58.77924516 +0000 UTC m=+236.883153801" Dec 12 15:24:58 crc kubenswrapper[5099]: I1212 15:24:58.969748 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" podStartSLOduration=51.969731071 podStartE2EDuration="51.969731071s" podCreationTimestamp="2025-12-12 15:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:58.969466244 +0000 UTC m=+237.073374875" watchObservedRunningTime="2025-12-12 15:24:58.969731071 +0000 UTC m=+237.073639712" Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.396952 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerStarted","Data":"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01"} Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.408300 5099 generic.go:358] "Generic (PLEG): container finished" podID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerID="483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe" exitCode=0 Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.408468 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerDied","Data":"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"} Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.414789 5099 generic.go:358] "Generic (PLEG): container finished" podID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerID="03df598b0d3adc44625414c00f50e7e3b8d008e79b8bbf1dabdfc61495d7716a" exitCode=0 Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.415012 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerDied","Data":"03df598b0d3adc44625414c00f50e7e3b8d008e79b8bbf1dabdfc61495d7716a"} Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.436246 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sfqhr" podStartSLOduration=15.161742575 podStartE2EDuration="1m10.436215577s" podCreationTimestamp="2025-12-12 15:23:49 +0000 UTC" firstStartedPulling="2025-12-12 15:24:01.111398949 +0000 UTC m=+179.215307590" lastFinishedPulling="2025-12-12 15:24:56.385871951 +0000 UTC m=+234.489780592" observedRunningTime="2025-12-12 15:24:59.428294242 +0000 UTC m=+237.532202883" watchObservedRunningTime="2025-12-12 
15:24:59.436215577 +0000 UTC m=+237.540124228" Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.450348 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.450691 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.450735 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.467964 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.850503 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:24:59 crc kubenswrapper[5099]: I1212 15:24:59.851021 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.459907 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerStarted","Data":"57798b3b72be1a9fc0947b343f01a58ef09b7e0d0e56d758e01ed5c5003c9ceb"} Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.464221 5099 generic.go:358] "Generic (PLEG): container finished" podID="d6634e23-c162-4979-935f-7737487537b4" containerID="b17b1acb60adb878edcc04ae50354df31a8cff5f63a0f002ca585ab318b85960" exitCode=0 Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.464367 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d6634e23-c162-4979-935f-7737487537b4","Type":"ContainerDied","Data":"b17b1acb60adb878edcc04ae50354df31a8cff5f63a0f002ca585ab318b85960"} Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.468519 5099 generic.go:358] "Generic (PLEG): container finished" podID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerID="d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43" exitCode=0 Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.479429 5099 generic.go:358] "Generic (PLEG): container finished" podID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerID="e91d917f2ec80b121f7b03887822d6187a2bc1050f6f1d8daf2172589a411299" exitCode=0 Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.480513 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" 
event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerDied","Data":"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43"} Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.480578 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerDied","Data":"e91d917f2ec80b121f7b03887822d6187a2bc1050f6f1d8daf2172589a411299"} Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.483512 5099 generic.go:358] "Generic (PLEG): container finished" podID="7becc184-0a0c-4a25-919f-6359f1da964e" containerID="fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b" exitCode=0 Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.484807 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerDied","Data":"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b"} Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.499324 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rxkm" podStartSLOduration=15.188416287999999 podStartE2EDuration="1m9.499299772s" podCreationTimestamp="2025-12-12 15:23:51 +0000 UTC" firstStartedPulling="2025-12-12 15:24:02.073616931 +0000 UTC m=+180.177525572" lastFinishedPulling="2025-12-12 15:24:56.384500415 +0000 UTC m=+234.488409056" observedRunningTime="2025-12-12 15:25:00.495440042 +0000 UTC m=+238.599348703" watchObservedRunningTime="2025-12-12 15:25:00.499299772 +0000 UTC m=+238.603208413" Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.535236 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.535486 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:25:00 crc kubenswrapper[5099]: I1212 15:25:00.930948 5099 ???:1] "http: TLS handshake error from 192.168.126.11:44264: no serving certificate available for the kubelet" Dec 12 15:25:01 crc kubenswrapper[5099]: I1212 15:25:01.983764 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.100060 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir\") pod \"d6634e23-c162-4979-935f-7737487537b4\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.100122 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access\") pod \"d6634e23-c162-4979-935f-7737487537b4\" (UID: \"d6634e23-c162-4979-935f-7737487537b4\") " Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.100200 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6634e23-c162-4979-935f-7737487537b4" (UID: "d6634e23-c162-4979-935f-7737487537b4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.100353 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6634e23-c162-4979-935f-7737487537b4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.107480 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6634e23-c162-4979-935f-7737487537b4" (UID: "d6634e23-c162-4979-935f-7737487537b4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.201143 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6634e23-c162-4979-935f-7737487537b4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.499585 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d6634e23-c162-4979-935f-7737487537b4","Type":"ContainerDied","Data":"5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b"} Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.499654 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.499603 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:25:02 crc kubenswrapper[5099]: I1212 15:25:02.904464 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:25:03 crc kubenswrapper[5099]: I1212 15:25:03.688401 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerID="1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e" exitCode=0 Dec 12 15:25:03 crc kubenswrapper[5099]: I1212 15:25:03.688911 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerDied","Data":"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e"} Dec 12 15:25:03 crc kubenswrapper[5099]: I1212 15:25:03.696960 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerStarted","Data":"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"} Dec 12 15:25:03 crc kubenswrapper[5099]: I1212 15:25:03.701014 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerStarted","Data":"a836b981f5d9d2be29945009b89459dee51d6d335ede09f30be1290611dd9842"} Dec 12 15:25:03 crc kubenswrapper[5099]: I1212 15:25:03.980952 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8qwhh" podStartSLOduration=20.569961256 podStartE2EDuration="1m14.980896406s" podCreationTimestamp="2025-12-12 15:23:49 +0000 UTC" firstStartedPulling="2025-12-12 15:24:02.050797576 +0000 
UTC m=+180.154706217" lastFinishedPulling="2025-12-12 15:24:56.461732726 +0000 UTC m=+234.565641367" observedRunningTime="2025-12-12 15:25:03.978963106 +0000 UTC m=+242.082871747" watchObservedRunningTime="2025-12-12 15:25:03.980896406 +0000 UTC m=+242.084805067" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.003407 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v8tnw" podStartSLOduration=21.406379952 podStartE2EDuration="1m17.003383385s" podCreationTimestamp="2025-12-12 15:23:47 +0000 UTC" firstStartedPulling="2025-12-12 15:24:01.090006281 +0000 UTC m=+179.193914922" lastFinishedPulling="2025-12-12 15:24:56.687009714 +0000 UTC m=+234.790918355" observedRunningTime="2025-12-12 15:25:03.99811844 +0000 UTC m=+242.102027081" watchObservedRunningTime="2025-12-12 15:25:04.003383385 +0000 UTC m=+242.107292026" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.249885 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.250015 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.574516 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.577899 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:04 crc kubenswrapper[5099]: I1212 15:25:04.996199 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerStarted","Data":"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759"} Dec 12 15:25:05 crc kubenswrapper[5099]: I1212 15:25:05.734411 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8qwhh" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="registry-server" probeResult="failure" output=< Dec 12 15:25:05 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:25:05 crc kubenswrapper[5099]: > Dec 12 15:25:05 crc kubenswrapper[5099]: I1212 15:25:05.747308 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rxkm" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="registry-server" probeResult="failure" output=< Dec 12 15:25:05 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:25:05 crc kubenswrapper[5099]: > Dec 12 15:25:05 crc kubenswrapper[5099]: I1212 15:25:05.756882 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmkwr" podStartSLOduration=17.127439485 podStartE2EDuration="1m18.75686107s" podCreationTimestamp="2025-12-12 15:23:47 +0000 UTC" firstStartedPulling="2025-12-12 15:23:54.800568683 +0000 UTC m=+172.904477314" lastFinishedPulling="2025-12-12 15:24:56.429990258 +0000 UTC m=+234.533898899" observedRunningTime="2025-12-12 15:25:05.752497847 +0000 UTC m=+243.856406518" watchObservedRunningTime="2025-12-12 15:25:05.75686107 +0000 UTC m=+243.860769711" Dec 12 15:25:05 crc kubenswrapper[5099]: E1212 15:25:05.868609 5099 
cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache]" Dec 12 15:25:07 crc kubenswrapper[5099]: I1212 15:25:07.080248 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerStarted","Data":"e43216e203a2ee40b136fe3c22f651a257db021d3447bd95472bb89d968ed3de"} Dec 12 15:25:07 crc kubenswrapper[5099]: I1212 15:25:07.084566 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerStarted","Data":"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd"} Dec 12 15:25:07 crc kubenswrapper[5099]: I1212 15:25:07.086545 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerStarted","Data":"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5"} Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.259802 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.259892 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.427991 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tvbzr" podStartSLOduration=25.089539199 podStartE2EDuration="1m20.427975599s" podCreationTimestamp="2025-12-12 15:23:48 +0000 UTC" firstStartedPulling="2025-12-12 15:24:01.112389295 +0000 UTC m=+179.216297936" lastFinishedPulling="2025-12-12 15:24:56.450825695 +0000 UTC m=+234.554734336" observedRunningTime="2025-12-12 15:25:08.426401889 +0000 UTC m=+246.530310530" watchObservedRunningTime="2025-12-12 15:25:08.427975599 +0000 UTC m=+246.531884250" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.454640 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.454723 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.454740 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.455177 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.467515 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nx984" podStartSLOduration=23.898567554 podStartE2EDuration="1m18.467483818s" podCreationTimestamp="2025-12-12 15:23:50 +0000 UTC" firstStartedPulling="2025-12-12 
15:24:02.079863514 +0000 UTC m=+180.183772155" lastFinishedPulling="2025-12-12 15:24:56.648779778 +0000 UTC m=+234.752688419" observedRunningTime="2025-12-12 15:25:08.454235366 +0000 UTC m=+246.558144007" watchObservedRunningTime="2025-12-12 15:25:08.467483818 +0000 UTC m=+246.571392459" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.512982 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.543854 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqx68" podStartSLOduration=26.005150263 podStartE2EDuration="1m21.543831046s" podCreationTimestamp="2025-12-12 15:23:47 +0000 UTC" firstStartedPulling="2025-12-12 15:24:01.110006533 +0000 UTC m=+179.213915174" lastFinishedPulling="2025-12-12 15:24:56.648687316 +0000 UTC m=+234.752595957" observedRunningTime="2025-12-12 15:25:08.489246169 +0000 UTC m=+246.593154830" watchObservedRunningTime="2025-12-12 15:25:08.543831046 +0000 UTC m=+246.647739687" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.551003 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:25:08 crc kubenswrapper[5099]: I1212 15:25:08.586395 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.177113 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.177357 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.398562 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v8tnw" Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.451289 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.451388 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.846093 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zqx68" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="registry-server" probeResult="failure" output=< Dec 12 15:25:09 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:25:09 crc kubenswrapper[5099]: > Dec 12 15:25:09 crc kubenswrapper[5099]: I1212 15:25:09.849970 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:09 crc 
kubenswrapper[5099]: I1212 15:25:09.850020 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:10 crc kubenswrapper[5099]: I1212 15:25:10.597345 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tvbzr" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="registry-server" probeResult="failure" output=< Dec 12 15:25:10 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:25:10 crc kubenswrapper[5099]: > Dec 12 15:25:12 crc kubenswrapper[5099]: I1212 15:25:12.053135 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"] Dec 12 15:25:12 crc kubenswrapper[5099]: I1212 15:25:12.053527 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v8tnw" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="registry-server" containerID="cri-o://a836b981f5d9d2be29945009b89459dee51d6d335ede09f30be1290611dd9842" gracePeriod=2 Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.124605 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.287521 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.327959 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8qwhh" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.535054 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.535285 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.598794 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.686770 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.719591 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:25:14 crc kubenswrapper[5099]: I1212 15:25:14.788114 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:15 crc kubenswrapper[5099]: I1212 15:25:15.060830 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"] Dec 12 15:25:15 crc kubenswrapper[5099]: I1212 15:25:15.675246 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8qwhh" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="registry-server" containerID="cri-o://07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186" gracePeriod=2 
Dec 12 15:25:16 crc kubenswrapper[5099]: E1212 15:25:16.010338 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache]"
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.515828 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.516169 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.686352 5099 generic.go:358] "Generic (PLEG): container finished" podID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerID="a836b981f5d9d2be29945009b89459dee51d6d335ede09f30be1290611dd9842" exitCode=0
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.686432 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerDied","Data":"a836b981f5d9d2be29945009b89459dee51d6d335ede09f30be1290611dd9842"}
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.861835 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"]
Dec 12 15:25:16 crc kubenswrapper[5099]: I1212 15:25:16.862861 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rxkm" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="registry-server" containerID="cri-o://57798b3b72be1a9fc0947b343f01a58ef09b7e0d0e56d758e01ed5c5003c9ceb" gracePeriod=2
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.519616 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8tnw"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.574628 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qwhh"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601686 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content\") pod \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601739 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities\") pod \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601765 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9znx\" (UniqueName: \"kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx\") pod \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\" (UID: \"a128b003-3b72-492a-af4a-a15e2f4f1c7a\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601864 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content\") pod \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601901 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities\") pod \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.601928 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt97t\" (UniqueName: \"kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t\") pod \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\" (UID: \"4a8a0d08-daaa-416f-b5b9-78c49ab92283\") "
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.603597 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities" (OuterVolumeSpecName: "utilities") pod "4a8a0d08-daaa-416f-b5b9-78c49ab92283" (UID: "4a8a0d08-daaa-416f-b5b9-78c49ab92283"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.603997 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities" (OuterVolumeSpecName: "utilities") pod "a128b003-3b72-492a-af4a-a15e2f4f1c7a" (UID: "a128b003-3b72-492a-af4a-a15e2f4f1c7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.614406 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a8a0d08-daaa-416f-b5b9-78c49ab92283" (UID: "4a8a0d08-daaa-416f-b5b9-78c49ab92283"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.639327 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx" (OuterVolumeSpecName: "kube-api-access-c9znx") pod "a128b003-3b72-492a-af4a-a15e2f4f1c7a" (UID: "a128b003-3b72-492a-af4a-a15e2f4f1c7a"). InnerVolumeSpecName "kube-api-access-c9znx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.639491 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t" (OuterVolumeSpecName: "kube-api-access-vt97t") pod "4a8a0d08-daaa-416f-b5b9-78c49ab92283" (UID: "4a8a0d08-daaa-416f-b5b9-78c49ab92283"). InnerVolumeSpecName "kube-api-access-vt97t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.646623 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a128b003-3b72-492a-af4a-a15e2f4f1c7a" (UID: "a128b003-3b72-492a-af4a-a15e2f4f1c7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.693233 5099 generic.go:358] "Generic (PLEG): container finished" podID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerID="07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186" exitCode=0
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.693314 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qwhh"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.693349 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerDied","Data":"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"}
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.693394 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qwhh" event={"ID":"4a8a0d08-daaa-416f-b5b9-78c49ab92283","Type":"ContainerDied","Data":"10a51897c939da746fc4706c6ef88ce630a4e0c621ad5d0c83c3baa591d55ebf"}
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.693422 5099 scope.go:117] "RemoveContainer" containerID="07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.697544 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8tnw"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.697603 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8tnw" event={"ID":"a128b003-3b72-492a-af4a-a15e2f4f1c7a","Type":"ContainerDied","Data":"1d7a719447cc857009696b5c4ab873bdebfa8cf69201eafbe0f45070aa0a3f64"}
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.700290 5099 generic.go:358] "Generic (PLEG): container finished" podID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerID="57798b3b72be1a9fc0947b343f01a58ef09b7e0d0e56d758e01ed5c5003c9ceb" exitCode=0
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.700472 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerDied","Data":"57798b3b72be1a9fc0947b343f01a58ef09b7e0d0e56d758e01ed5c5003c9ceb"}
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703430 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703462 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a8a0d08-daaa-416f-b5b9-78c49ab92283-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703510 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vt97t\" (UniqueName: \"kubernetes.io/projected/4a8a0d08-daaa-416f-b5b9-78c49ab92283-kube-api-access-vt97t\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703526 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703537 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a128b003-3b72-492a-af4a-a15e2f4f1c7a-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.703548 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c9znx\" (UniqueName: \"kubernetes.io/projected/a128b003-3b72-492a-af4a-a15e2f4f1c7a-kube-api-access-c9znx\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.713775 5099 scope.go:117] "RemoveContainer" containerID="483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.720997 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"]
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.723083 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qwhh"]
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.733185 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"]
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.736076 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v8tnw"]
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.742155 5099 scope.go:117] "RemoveContainer" containerID="884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.757020 5099 scope.go:117] "RemoveContainer" containerID="07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"
Dec 12 15:25:17 crc kubenswrapper[5099]: E1212 15:25:17.757522 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186\": container with ID starting with 07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186 not found: ID does not exist" containerID="07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.757583 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186"} err="failed to get container status \"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186\": rpc error: code = NotFound desc = could not find container \"07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186\": container with ID starting with 07f3e5e86de957e62c16504e04f539d7583cd414161b8ed3e0efba43ac7bf186 not found: ID does not exist"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.757612 5099 scope.go:117] "RemoveContainer" containerID="483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"
Dec 12 15:25:17 crc kubenswrapper[5099]: E1212 15:25:17.758064 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe\": container with ID starting with 483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe not found: ID does not exist" containerID="483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.758100 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe"} err="failed to get container status \"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe\": rpc error: code = NotFound desc = could not find container \"483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe\": container with ID starting with 483ccf52fb063073b4b7a9458d7fa9e545aa7222dbdee949eedf229a2bf4c9fe not found: ID does not exist"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.758120 5099 scope.go:117] "RemoveContainer" containerID="884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718"
Dec 12 15:25:17 crc kubenswrapper[5099]: E1212 15:25:17.758379 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718\": container with ID starting with 884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718 not found: ID does not exist" containerID="884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718"
Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.758420 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718"} err="failed to get container status \"884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718\":
rpc error: code = NotFound desc = could not find container \"884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718\": container with ID starting with 884db45ce901032530a365ab06d1ef46185d1c6662b7f721887abb7ddb654718 not found: ID does not exist" Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.758442 5099 scope.go:117] "RemoveContainer" containerID="a836b981f5d9d2be29945009b89459dee51d6d335ede09f30be1290611dd9842" Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.777979 5099 scope.go:117] "RemoveContainer" containerID="03df598b0d3adc44625414c00f50e7e3b8d008e79b8bbf1dabdfc61495d7716a" Dec 12 15:25:17 crc kubenswrapper[5099]: I1212 15:25:17.793875 5099 scope.go:117] "RemoveContainer" containerID="5e71dd88e1f4ec7fa52bd6823865b751fc7a5da7cfeb99d03fe824a21012abb1" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.475535 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" path="/var/lib/kubelet/pods/4a8a0d08-daaa-416f-b5b9-78c49ab92283/volumes" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.476701 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" path="/var/lib/kubelet/pods/a128b003-3b72-492a-af4a-a15e2f4f1c7a/volumes" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.508549 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.550345 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.644054 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.708061 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rxkm" event={"ID":"b9acc6f5-a7ea-4640-b35d-014f831b262d","Type":"ContainerDied","Data":"1417f5316fb8c887cafd413dd6fbe56a849ca5660ef3b4a2180d35d0e8dc77b0"} Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.708130 5099 scope.go:117] "RemoveContainer" containerID="57798b3b72be1a9fc0947b343f01a58ef09b7e0d0e56d758e01ed5c5003c9ceb" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.708255 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rxkm" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.712819 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74fjj\" (UniqueName: \"kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj\") pod \"b9acc6f5-a7ea-4640-b35d-014f831b262d\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.713101 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities\") pod \"b9acc6f5-a7ea-4640-b35d-014f831b262d\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.713209 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content\") pod \"b9acc6f5-a7ea-4640-b35d-014f831b262d\" (UID: \"b9acc6f5-a7ea-4640-b35d-014f831b262d\") " Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.714135 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities" (OuterVolumeSpecName: "utilities") pod "b9acc6f5-a7ea-4640-b35d-014f831b262d" (UID: "b9acc6f5-a7ea-4640-b35d-014f831b262d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.718907 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj" (OuterVolumeSpecName: "kube-api-access-74fjj") pod "b9acc6f5-a7ea-4640-b35d-014f831b262d" (UID: "b9acc6f5-a7ea-4640-b35d-014f831b262d"). InnerVolumeSpecName "kube-api-access-74fjj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.726030 5099 scope.go:117] "RemoveContainer" containerID="d4612ed2327191bbd221c38395d0ef39155a0852fdc60468bd8adc428b0c08db" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.748921 5099 scope.go:117] "RemoveContainer" containerID="0c90461e952f1ed81818a648eed1a94a8fb18a95feada73147b0a351a77be4ac" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.815516 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-74fjj\" (UniqueName: \"kubernetes.io/projected/b9acc6f5-a7ea-4640-b35d-014f831b262d-kube-api-access-74fjj\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:18 crc kubenswrapper[5099]: I1212 15:25:18.815569 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.218967 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.278399 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.453275 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.453997 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.850610 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.851680 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.851811 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.852401 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"59a0c8863b379414ba5765f826a6f80d964d54c6affd9d9af768840cbfedd78d"} pod="openshift-console/downloads-747b44746d-87m2r" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.852518 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" 
containerName="download-server" containerID="cri-o://59a0c8863b379414ba5765f826a6f80d964d54c6affd9d9af768840cbfedd78d" gracePeriod=2 Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.856825 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:19 crc kubenswrapper[5099]: I1212 15:25:19.856909 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:21 crc kubenswrapper[5099]: I1212 15:25:21.853800 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:25:21 crc kubenswrapper[5099]: I1212 15:25:21.854529 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tvbzr" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="registry-server" containerID="cri-o://e43216e203a2ee40b136fe3c22f651a257db021d3447bd95472bb89d968ed3de" gracePeriod=2 Dec 12 15:25:22 crc kubenswrapper[5099]: I1212 15:25:22.117011 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9acc6f5-a7ea-4640-b35d-014f831b262d" (UID: "b9acc6f5-a7ea-4640-b35d-014f831b262d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:22 crc kubenswrapper[5099]: I1212 15:25:22.251706 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9acc6f5-a7ea-4640-b35d-014f831b262d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:22 crc kubenswrapper[5099]: I1212 15:25:22.338809 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"] Dec 12 15:25:22 crc kubenswrapper[5099]: I1212 15:25:22.342540 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rxkm"] Dec 12 15:25:22 crc kubenswrapper[5099]: I1212 15:25:22.479381 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" path="/var/lib/kubelet/pods/b9acc6f5-a7ea-4640-b35d-014f831b262d/volumes" Dec 12 15:25:23 crc kubenswrapper[5099]: I1212 15:25:23.746760 5099 generic.go:358] "Generic (PLEG): container finished" podID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerID="59a0c8863b379414ba5765f826a6f80d964d54c6affd9d9af768840cbfedd78d" exitCode=0 Dec 12 15:25:23 crc kubenswrapper[5099]: I1212 15:25:23.746836 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerDied","Data":"59a0c8863b379414ba5765f826a6f80d964d54c6affd9d9af768840cbfedd78d"} Dec 12 15:25:23 crc kubenswrapper[5099]: I1212 15:25:23.747261 5099 scope.go:117] "RemoveContainer" containerID="73645a3c43c1ef82609189af3ee864f654ee678f549b521046d6c3e2ad5d1cf7" Dec 12 15:25:23 crc kubenswrapper[5099]: I1212 15:25:23.750086 5099 generic.go:358] "Generic (PLEG): container finished" 
podID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerID="e43216e203a2ee40b136fe3c22f651a257db021d3447bd95472bb89d968ed3de" exitCode=0 Dec 12 15:25:23 crc kubenswrapper[5099]: I1212 15:25:23.750166 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerDied","Data":"e43216e203a2ee40b136fe3c22f651a257db021d3447bd95472bb89d968ed3de"} Dec 12 15:25:25 crc kubenswrapper[5099]: I1212 15:25:25.910825 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:25 crc kubenswrapper[5099]: I1212 15:25:25.996248 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content\") pod \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " Dec 12 15:25:25 crc kubenswrapper[5099]: I1212 15:25:25.996432 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities\") pod \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " Dec 12 15:25:25 crc kubenswrapper[5099]: I1212 15:25:25.996582 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggtqz\" (UniqueName: \"kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz\") pod \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\" (UID: \"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae\") " Dec 12 15:25:25 crc kubenswrapper[5099]: I1212 15:25:25.998277 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities" (OuterVolumeSpecName: "utilities") pod "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" (UID: "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.005633 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz" (OuterVolumeSpecName: "kube-api-access-ggtqz") pod "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" (UID: "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae"). InnerVolumeSpecName "kube-api-access-ggtqz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.065149 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" (UID: "87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.098079 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ggtqz\" (UniqueName: \"kubernetes.io/projected/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-kube-api-access-ggtqz\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.098137 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.098156 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:26 crc kubenswrapper[5099]: E1212 15:25:26.143102 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache]" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.782796 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbzr" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.782920 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbzr" event={"ID":"87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae","Type":"ContainerDied","Data":"3d5ec0a1179cc21951c11aa9d8ee99310bab2b0c55516a12a71f1e79bafcd5c9"} Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.783354 5099 scope.go:117] "RemoveContainer" containerID="e43216e203a2ee40b136fe3c22f651a257db021d3447bd95472bb89d968ed3de" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.804205 5099 scope.go:117] "RemoveContainer" containerID="e91d917f2ec80b121f7b03887822d6187a2bc1050f6f1d8daf2172589a411299" Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.804596 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.807786 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tvbzr"] Dec 12 15:25:26 crc kubenswrapper[5099]: I1212 15:25:26.826697 5099 scope.go:117] "RemoveContainer" containerID="34d272199636b7e4436bbdbdcd8195803427324fc0c3d4271493e9620662ba4e" Dec 12 15:25:29 crc kubenswrapper[5099]: I1212 15:25:29.853328 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:29 crc kubenswrapper[5099]: I1212 15:25:29.853811 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:32 crc kubenswrapper[5099]: I1212 15:25:32.649260 5099 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" path="/var/lib/kubelet/pods/87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae/volumes" Dec 12 15:25:36 crc kubenswrapper[5099]: E1212 15:25:36.470739 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache]" Dec 12 15:25:38 crc kubenswrapper[5099]: I1212 15:25:38.256348 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259000 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259055 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259092 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259105 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" containerName="kube-multus-additional-cni-plugins" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259138 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259151 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259189 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259201 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259234 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259246 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259261 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259272 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259288 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259299 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259311 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259322 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259342 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259356 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259382 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6634e23-c162-4979-935f-7737487537b4" containerName="pruner" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259405 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6634e23-c162-4979-935f-7737487537b4" containerName="pruner" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259425 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259436 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259456 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259467 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="extract-utilities" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259484 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259495 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259510 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259521 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="extract-content" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259746 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="87e8cc75-22ad-4cd9-afc2-0da7c49fe9ae" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259775 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fbc0f6e-a03e-414c-8f95-4bc036fac71b" 
containerName="kube-multus-additional-cni-plugins" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259789 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6634e23-c162-4979-935f-7737487537b4" containerName="pruner" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259804 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9acc6f5-a7ea-4640-b35d-014f831b262d" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259822 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a128b003-3b72-492a-af4a-a15e2f4f1c7a" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:38.259842 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4a8a0d08-daaa-416f-b5b9-78c49ab92283" containerName="registry-server" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:39.864259 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:39.864625 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.209077 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.209320 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.209382 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211002 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://ca3285329aa4c3479abd6f07bee49da719a2390ebd7853988f5b4976e7674ea8" gracePeriod=15 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211199 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0a14f83889283662ebe1bbd434233932edffd27b38750ab5edcb5866da81a2b8" gracePeriod=15 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211249 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e83bfe0224eb5c69f23132252eaba9bc3bd0dac62e3a19cf0262162b4c627b2e" gracePeriod=15 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211309 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a08390e968f9efeb59c99fd00241be16290395dce5767e17f0e950a1770db419" gracePeriod=15 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211371 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://261826a2e5b8909122df752fe1f4ad82d30626fd2d53ea720df5e71448d34d14" gracePeriod=15 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211401 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211426 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211439 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211445 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211459 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211464 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211473 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211480 5099 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211495 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211501 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211514 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211522 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211531 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211537 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211545 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211551 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211713 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211738 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211745 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211751 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211758 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211764 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211772 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211778 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 
15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211783 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211944 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211957 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211973 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.211980 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.216286 5099 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.308792 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.309501 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.309653 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.309794 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.309853 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.314967 5099 kubelet.go:3340] 
"Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: E1212 15:25:42.316137 5099 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.371460 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-87m2r" event={"ID":"42d147e5-750e-4c46-bb7a-e99a34fca2f9","Type":"ContainerStarted","Data":"6dcd0a6c21837e1170d8519d8e7ec63b6524081bb934e2215f7b02cff8365fa2"} Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412065 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412185 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412223 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412204 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412312 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412324 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412250 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc 
kubenswrapper[5099]: I1212 15:25:42.412377 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412435 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.412478 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.514060 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.514139 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.514158 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.514184 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.514239 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615050 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615158 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615220 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615234 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615271 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615296 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615350 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.615609 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.616430 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.616507 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.616880 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:42 crc kubenswrapper[5099]: W1212 15:25:42.642698 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-dfa80e205fd2236a2858198bdd437876efb6d74bff72c8bd05b7105f3e5ed9af WatchSource:0}: Error finding container dfa80e205fd2236a2858198bdd437876efb6d74bff72c8bd05b7105f3e5ed9af: Status 404 returned error can't find the container with id dfa80e205fd2236a2858198bdd437876efb6d74bff72c8bd05b7105f3e5ed9af Dec 12 15:25:42 crc kubenswrapper[5099]: E1212 15:25:42.646543 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880813a9b792106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,LastTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.920656 5099 generic.go:358] "Generic (PLEG): container finished" podID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" containerID="94a27d2417f2326d9e22e718357765a366428b2bba7910a9a871cdff7c0da7cb" exitCode=0 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.920833 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f","Type":"ContainerDied","Data":"94a27d2417f2326d9e22e718357765a366428b2bba7910a9a871cdff7c0da7cb"} Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.923038 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.925596 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.927334 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.928185 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0a14f83889283662ebe1bbd434233932edffd27b38750ab5edcb5866da81a2b8" exitCode=0 Dec 12 15:25:42 crc 
kubenswrapper[5099]: I1212 15:25:42.928205 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e83bfe0224eb5c69f23132252eaba9bc3bd0dac62e3a19cf0262162b4c627b2e" exitCode=0 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.928212 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a08390e968f9efeb59c99fd00241be16290395dce5767e17f0e950a1770db419" exitCode=0 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.928218 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="261826a2e5b8909122df752fe1f4ad82d30626fd2d53ea720df5e71448d34d14" exitCode=2 Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.928340 5099 scope.go:117] "RemoveContainer" containerID="42055ac7253f09dc89a3a0f2595d9eb0f6d6bafc85f3c6817b9db1bfec066c57" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.932967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"dfa80e205fd2236a2858198bdd437876efb6d74bff72c8bd05b7105f3e5ed9af"} Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.933459 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.933811 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.933862 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.934194 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:42 crc kubenswrapper[5099]: I1212 15:25:42.934715 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:43 crc kubenswrapper[5099]: E1212 15:25:43.936219 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880813a9b792106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,LastTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.942883 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.945926 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e0baaf5b411c448fa82d5c594b0bffefa15c9b0495514ab32bbdc68ecc0b034e"} Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.946505 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.946795 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:43 crc kubenswrapper[5099]: E1212 15:25:43.947004 5099 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.947070 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.948205 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-87m2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 12 15:25:43 crc kubenswrapper[5099]: I1212 15:25:43.948340 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-87m2r" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.936947 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.944513 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.945091 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.957932 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.958407 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.958529 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f","Type":"ContainerDied","Data":"68581b6301e7c97b89bbebd2b213a5d88d4f6c34854dc32e4a696a26a375990c"} Dec 12 15:25:44 crc kubenswrapper[5099]: I1212 15:25:44.958565 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68581b6301e7c97b89bbebd2b213a5d88d4f6c34854dc32e4a696a26a375990c" Dec 12 15:25:44 crc kubenswrapper[5099]: E1212 15:25:44.958686 5099 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.333259 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock\") pod \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.333694 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir\") pod \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.333870 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access\") pod \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\" (UID: \"a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f\") " Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.333502 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock" (OuterVolumeSpecName: "var-lock") pod "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" (UID: "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.333885 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" (UID: "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.347811 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" (UID: "a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.440277 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.440348 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.440376 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.718276 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.718796 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.970362 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:25:45 crc kubenswrapper[5099]: I1212 15:25:45.972752 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ca3285329aa4c3479abd6f07bee49da719a2390ebd7853988f5b4976e7674ea8" exitCode=0 Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.610108 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.610199 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" 
podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.618417 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.619073 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.619152 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14" gracePeriod=600 Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.661311 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.662273 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.663107 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.663541 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.663913 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:46 crc kubenswrapper[5099]: E1212 15:25:46.739640 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache]" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786271 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786365 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786381 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786403 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786426 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786487 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786526 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.786564 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.787262 5099 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.787284 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.787294 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.788075 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.789260 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.940080 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.940419 5099 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.979894 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.980773 5099 scope.go:117] "RemoveContainer" containerID="0a14f83889283662ebe1bbd434233932edffd27b38750ab5edcb5866da81a2b8" Dec 12 15:25:46 crc kubenswrapper[5099]: I1212 15:25:46.980839 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.004838 5099 scope.go:117] "RemoveContainer" containerID="e83bfe0224eb5c69f23132252eaba9bc3bd0dac62e3a19cf0262162b4c627b2e" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.008182 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.008436 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.008650 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.022944 5099 scope.go:117] "RemoveContainer" containerID="a08390e968f9efeb59c99fd00241be16290395dce5767e17f0e950a1770db419" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.038954 5099 scope.go:117] "RemoveContainer" containerID="261826a2e5b8909122df752fe1f4ad82d30626fd2d53ea720df5e71448d34d14" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.054463 5099 scope.go:117] "RemoveContainer" containerID="ca3285329aa4c3479abd6f07bee49da719a2390ebd7853988f5b4976e7674ea8" Dec 12 15:25:47 crc kubenswrapper[5099]: I1212 15:25:47.070719 5099 scope.go:117] "RemoveContainer" containerID="2fc9a00e37b1547b4a00b0a5818ba6fd62e1622ac01a848616d6ea2cb5ae35ac" Dec 12 15:25:48 crc kubenswrapper[5099]: I1212 15:25:48.469091 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14" exitCode=0 Dec 12 15:25:48 crc kubenswrapper[5099]: I1212 15:25:48.486281 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 12 15:25:48 crc kubenswrapper[5099]: I1212 15:25:48.489709 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14"} Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.446613 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.447327 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection 
refused" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.447795 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.448173 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.448654 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:49 crc kubenswrapper[5099]: I1212 15:25:49.448706 5099 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.449011 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="200ms" Dec 12 15:25:49 crc kubenswrapper[5099]: E1212 15:25:49.649918 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="400ms" Dec 12 15:25:50 crc kubenswrapper[5099]: E1212 15:25:50.050891 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="800ms" Dec 12 15:25:50 crc kubenswrapper[5099]: E1212 15:25:50.945404 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="1.6s" Dec 12 15:25:52 crc kubenswrapper[5099]: I1212 15:25:52.471909 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:52 crc kubenswrapper[5099]: I1212 15:25:52.472736 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:52 crc kubenswrapper[5099]: I1212 15:25:52.504453 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b"} Dec 12 15:25:52 crc 
kubenswrapper[5099]: E1212 15:25:52.548252 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="3.2s" Dec 12 15:25:52 crc kubenswrapper[5099]: E1212 15:25:52.572124 5099 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" volumeName="registry-storage" Dec 12 15:25:53 crc kubenswrapper[5099]: I1212 15:25:53.510368 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:53 crc kubenswrapper[5099]: I1212 15:25:53.510639 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:53 crc kubenswrapper[5099]: I1212 15:25:53.510887 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: E1212 15:25:54.238695 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880813a9b792106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,LastTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.740166 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.746895 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.747296 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.747567 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.803159 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-87m2r" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.807318 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.807625 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.808191 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.810748 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.810786 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:25:54 crc kubenswrapper[5099]: E1212 15:25:54.811092 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:54 crc kubenswrapper[5099]: I1212 15:25:54.811652 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:25:55 crc kubenswrapper[5099]: E1212 15:25:55.750428 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="6.4s" Dec 12 15:25:55 crc kubenswrapper[5099]: I1212 15:25:55.811723 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c26309335b0f913cb25c367c29bd6e2570f550dd6507a94b1b018b5c382264d8"} Dec 12 15:25:56 crc kubenswrapper[5099]: E1212 15:25:56.856990 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice/crio-5b66904c20f9c6c63b0262a899112143430156a940d86b47774e9a8ac5a90c2b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd6634e23_c162_4979_935f_7737487537b4.slice\": RecentStats: unable to find data in memory cache]" Dec 12 15:25:59 crc kubenswrapper[5099]: I1212 15:25:59.545003 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 15:25:59 crc kubenswrapper[5099]: I1212 15:25:59.546543 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.652163 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:26:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:26:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:26:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:26:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.653215 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" 
Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.653803 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.654653 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.655274 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:00 crc kubenswrapper[5099]: E1212 15:26:00.655308 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:26:02 crc kubenswrapper[5099]: E1212 15:26:02.152197 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="7s" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.476038 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.476969 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.477519 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.478150 5099 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.600463 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.602065 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 
15:26:02.609921 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.610010 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.862097 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.862418 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9" exitCode=1 Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.862484 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9"} Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.863292 5099 scope.go:117] "RemoveContainer" containerID="16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.865788 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.866280 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.866777 5099 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.867885 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:02 crc kubenswrapper[5099]: I1212 15:26:02.868704 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" 
pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:03 crc kubenswrapper[5099]: I1212 15:26:03.878846 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:26:03 crc kubenswrapper[5099]: I1212 15:26:03.879492 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d8f6227f4df671c2634a425ade027a8ed48567480960b2fc3efa7c1783e1d468"} Dec 12 15:26:03 crc kubenswrapper[5099]: I1212 15:26:03.883151 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ee8d1703f69d9199f2ac5404272d66c9465ed4795fd625a936d39631a2e49f07"} Dec 12 15:26:04 crc kubenswrapper[5099]: E1212 15:26:04.239947 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880813a9b792106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,LastTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:26:06 crc kubenswrapper[5099]: I1212 15:26:06.911773 5099 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="ee8d1703f69d9199f2ac5404272d66c9465ed4795fd625a936d39631a2e49f07" exitCode=0 Dec 12 15:26:06 crc kubenswrapper[5099]: I1212 15:26:06.912353 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"ee8d1703f69d9199f2ac5404272d66c9465ed4795fd625a936d39631a2e49f07"} Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.917872 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.917916 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.918511 5099 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:07 crc kubenswrapper[5099]: E1212 15:26:07.918532 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.918980 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.919269 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.919560 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.920299 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:07 crc kubenswrapper[5099]: I1212 15:26:07.920904 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:26:08 crc kubenswrapper[5099]: I1212 15:26:08.924493 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:08 crc kubenswrapper[5099]: I1212 15:26:08.925300 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:08 crc kubenswrapper[5099]: I1212 15:26:08.925638 5099 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:08 crc 
kubenswrapper[5099]: I1212 15:26:08.926017 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:08 crc kubenswrapper[5099]: I1212 15:26:08.926272 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:09 crc kubenswrapper[5099]: E1212 15:26:09.154400 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="7s" Dec 12 15:26:10 crc kubenswrapper[5099]: I1212 15:26:10.587129 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:26:10 crc kubenswrapper[5099]: I1212 15:26:10.587130 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 15:26:10 crc kubenswrapper[5099]: I1212 15:26:10.587278 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.475290 5099 status_manager.go:895] "Failed to get status for pod" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.475642 5099 status_manager.go:895] "Failed to get status for pod" podUID="42d147e5-750e-4c46-bb7a-e99a34fca2f9" pod="openshift-console/downloads-747b44746d-87m2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-87m2r\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.476052 5099 status_manager.go:895] "Failed to get status for pod" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qwqjz\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.476389 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.476607 5099 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Dec 12 15:26:12 crc kubenswrapper[5099]: I1212 15:26:12.609194 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:26:14 crc kubenswrapper[5099]: E1212 15:26:14.240602 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880813a9b792106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,LastTimestamp:2025-12-12 15:25:42.645727494 +0000 UTC m=+280.749636145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:26:15 crc kubenswrapper[5099]: I1212 15:26:15.161857 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bda5f07cfd2650c0c30526e28e1f0e9e92275556cc5602ad121a717b43d2f9c7"} Dec 12 15:26:19 crc kubenswrapper[5099]: I1212 15:26:19.494243 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0c74497a747cc6e94e7e148186ecaccf6642ed7e1ede8fb1527d4df874ca3d41"} Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.505928 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8f8cfbec7bb8a58184ae0c4e79432c6b15e04b6bf69c9c8f72f1174f0a7dca84"} Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.506364 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.506384 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"75e5bc4c99e346360d62bc77ae3cebe6dd7aecd00bf107841441c74788bfcefc"} Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.506396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d2f052e351a9a0022af0b3573eaaad1b94b6182d91355979dca28c1bee6729f6"} Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.507272 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.507315 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.514693 5099 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.514724 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.587404 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.587467 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 15:26:20 crc kubenswrapper[5099]: I1212 15:26:20.590683 5099 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="7b9d2851-6dab-4234-9d62-893717d78d01" Dec 12 15:26:21 crc kubenswrapper[5099]: I1212 15:26:21.514118 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:21 crc kubenswrapper[5099]: I1212 15:26:21.514178 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0710a3ca-a09c-439d-97ba-3f61e859fc53" Dec 12 15:26:21 crc kubenswrapper[5099]: I1212 15:26:21.519150 5099 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="7b9d2851-6dab-4234-9d62-893717d78d01" Dec 12 15:26:23 crc kubenswrapper[5099]: I1212 15:26:23.527278 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:26:23 crc kubenswrapper[5099]: I1212 15:26:23.527955 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc4541ce-7789-4670-bc75-5c2868e52ce0" containerID="fcd1a3f89c6463b0d2003c333c37ae385286383d9d4d01f4e8e61f5e6bac9923" exitCode=1 Dec 12 15:26:23 crc kubenswrapper[5099]: I1212 15:26:23.528056 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerDied","Data":"fcd1a3f89c6463b0d2003c333c37ae385286383d9d4d01f4e8e61f5e6bac9923"} Dec 12 15:26:23 crc kubenswrapper[5099]: I1212 15:26:23.528681 5099 scope.go:117] "RemoveContainer" containerID="fcd1a3f89c6463b0d2003c333c37ae385286383d9d4d01f4e8e61f5e6bac9923" Dec 12 15:26:24 crc kubenswrapper[5099]: I1212 15:26:24.536517 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:26:24 crc kubenswrapper[5099]: I1212 15:26:24.536932 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"36564202d9c79774d81a057bd41fba944d68fbc32ec5b65c82fb87408ca0f107"} Dec 12 15:26:30 crc kubenswrapper[5099]: I1212 15:26:30.587080 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 15:26:30 crc kubenswrapper[5099]: I1212 15:26:30.587829 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 15:26:30 crc kubenswrapper[5099]: I1212 15:26:30.587932 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:26:30 crc kubenswrapper[5099]: I1212 15:26:30.588847 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"d8f6227f4df671c2634a425ade027a8ed48567480960b2fc3efa7c1783e1d468"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 12 15:26:30 crc kubenswrapper[5099]: I1212 15:26:30.588989 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://d8f6227f4df671c2634a425ade027a8ed48567480960b2fc3efa7c1783e1d468" gracePeriod=30 Dec 12 15:26:44 crc kubenswrapper[5099]: I1212 15:26:44.680435 5099 generic.go:358] "Generic (PLEG): container finished" podID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerID="a7f5aa33a7ac8745337142697b31b6427e49712f17ae308468a66abf8bdbb247" exitCode=0 Dec 12 15:26:44 crc kubenswrapper[5099]: I1212 15:26:44.680521 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerDied","Data":"a7f5aa33a7ac8745337142697b31b6427e49712f17ae308468a66abf8bdbb247"} Dec 12 15:26:44 crc kubenswrapper[5099]: I1212 15:26:44.681645 5099 scope.go:117] "RemoveContainer" containerID="a7f5aa33a7ac8745337142697b31b6427e49712f17ae308468a66abf8bdbb247" Dec 12 15:26:44 crc kubenswrapper[5099]: I1212 15:26:44.772989 5099 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.648686 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.690553 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/1.log" Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.691096 5099 generic.go:358] "Generic (PLEG): container finished" podID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" exitCode=1 Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.691271 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerDied","Data":"f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67"} Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.691621 5099 scope.go:117] "RemoveContainer" containerID="a7f5aa33a7ac8745337142697b31b6427e49712f17ae308468a66abf8bdbb247" Dec 12 15:26:45 crc kubenswrapper[5099]: I1212 15:26:45.691756 5099 scope.go:117] "RemoveContainer" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" Dec 12 15:26:45 crc kubenswrapper[5099]: E1212 15:26:45.692212 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:26:46 crc kubenswrapper[5099]: I1212 15:26:46.700254 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/1.log" Dec 12 15:26:46 crc kubenswrapper[5099]: I1212 15:26:46.700892 5099 scope.go:117] "RemoveContainer" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" Dec 12 15:26:46 crc kubenswrapper[5099]: E1212 15:26:46.701148 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:26:47 crc kubenswrapper[5099]: I1212 15:26:47.158765 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 15:26:47 crc kubenswrapper[5099]: I1212 15:26:47.340856 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 15:26:47 crc kubenswrapper[5099]: I1212 15:26:47.639206 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:26:47 crc kubenswrapper[5099]: I1212 15:26:47.903068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 15:26:48 crc kubenswrapper[5099]: I1212 15:26:48.035799 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 15:26:48 crc kubenswrapper[5099]: I1212 15:26:48.068493 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 15:26:48 crc kubenswrapper[5099]: I1212 15:26:48.432189 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:26:48 crc kubenswrapper[5099]: I1212 15:26:48.949996 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.022132 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.057911 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.107915 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.274405 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.285832 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.286131 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.296306 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.557599 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.681740 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.716223 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.965132 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:26:49 crc kubenswrapper[5099]: I1212 15:26:49.967143 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:26:50 crc 
kubenswrapper[5099]: I1212 15:26:50.353414 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:26:50 crc kubenswrapper[5099]: I1212 15:26:50.355628 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 15:26:50 crc kubenswrapper[5099]: I1212 15:26:50.524548 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 15:26:50 crc kubenswrapper[5099]: I1212 15:26:50.581845 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 15:26:50 crc kubenswrapper[5099]: I1212 15:26:50.897333 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 15:26:50 crc kubenswrapper[5099]: I1212 15:26:50.904646 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.012701 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.119464 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.229718 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.347009 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.573092 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.583957 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.586251 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 15:26:51 crc kubenswrapper[5099]: I1212 15:26:51.836516 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.002167 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.032147 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.484966 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.499134 5099 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.558627 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.730827 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.731696 5099 scope.go:117] "RemoveContainer" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" Dec 12 15:26:52 crc kubenswrapper[5099]: E1212 15:26:52.732123 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:26:52 crc kubenswrapper[5099]: I1212 15:26:52.756627 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.035707 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.125477 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.154496 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.270080 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.417777 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.791123 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:53 crc kubenswrapper[5099]: I1212 15:26:53.905258 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.029182 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.044559 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.052877 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.184677 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.274487 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.274697 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.344413 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.497910 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.585031 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.701034 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.711870 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.749961 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.766401 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.806920 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.947554 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 15:26:54 crc kubenswrapper[5099]: I1212 15:26:54.990875 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.033068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.282522 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.590913 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.647572 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.648206 5099 scope.go:117] "RemoveContainer" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.708838 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 15:26:55 crc kubenswrapper[5099]: I1212 15:26:55.755030 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.143130 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.147290 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.169152 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.388387 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.478388 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.518314 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.666400 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.679898 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.759074 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/2.log" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.760015 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/1.log" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.760058 5099 generic.go:358] "Generic (PLEG): container finished" podID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" exitCode=1 Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.760170 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerDied","Data":"430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c"} Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.760223 5099 scope.go:117] "RemoveContainer" containerID="f98036c77cb55a6b168f299c8e22936b0d902eb9c408debdfa6931c572c53d67" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.760884 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:26:56 crc kubenswrapper[5099]: E1212 15:26:56.761191 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.838266 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.872826 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 15:26:56 crc kubenswrapper[5099]: I1212 15:26:56.986268 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.047370 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.103962 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.155635 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.188813 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.234482 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.509974 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.520934 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.669280 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.767489 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/2.log" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.776621 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.799569 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.821852 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.858151 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.973857 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.987314 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 15:26:57 crc kubenswrapper[5099]: I1212 15:26:57.993856 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 15:26:58 crc kubenswrapper[5099]: I1212 15:26:58.448054 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 15:26:58 crc kubenswrapper[5099]: I1212 15:26:58.550231 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.037831 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.112722 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.157390 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.190490 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.397102 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.471871 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.627747 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.678198 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.944029 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 15:26:59 crc kubenswrapper[5099]: I1212 15:26:59.949295 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.001307 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.148371 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.219139 5099 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.251185 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.276910 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.330380 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.383449 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.383742 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.830472 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.830599 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.830780 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.830901 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.830986 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.930287 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.946205 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.958324 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.960508 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.960565 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="d8f6227f4df671c2634a425ade027a8ed48567480960b2fc3efa7c1783e1d468" exitCode=137 Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.960622 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"d8f6227f4df671c2634a425ade027a8ed48567480960b2fc3efa7c1783e1d468"} Dec 12 15:27:00 crc kubenswrapper[5099]: I1212 15:27:00.960755 5099 scope.go:117] "RemoveContainer" containerID="16aff26b4dfdededb7035f2088c1478159d9bc5ea17e8c5d497fb895944d4da9" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.202290 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.202612 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.212898 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.287970 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.345815 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.481007 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.670444 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.903316 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.906403 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.966079 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.970542 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.971691 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3bf79d1ee90e39fd324519f64bed3b59395ef9fac74698cc0d21484050610222"} Dec 12 15:27:01 crc kubenswrapper[5099]: I1212 15:27:01.979776 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.022038 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.117645 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.552434 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.599479 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.608409 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.717704 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.731594 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:27:02 crc kubenswrapper[5099]: I1212 15:27:02.732357 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:27:02 crc kubenswrapper[5099]: E1212 15:27:02.732655 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.040384 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.066414 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.187013 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.247854 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.337546 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.365516 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.629847 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.746941 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.902517 5099 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 15:27:03 crc kubenswrapper[5099]: I1212 15:27:03.971014 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.019409 5099 ???:1] "http: TLS handshake error from 192.168.126.11:37126: no serving certificate available for the kubelet" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.136810 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.188936 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.215493 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.293283 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.328711 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.545517 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.982562 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 15:27:04 crc kubenswrapper[5099]: I1212 15:27:04.985222 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.005917 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.198620 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.293612 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.428151 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.466634 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.905035 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.906395 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 
15:27:05.906547 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 15:27:05 crc kubenswrapper[5099]: E1212 15:27:05.907102 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-fmnlp_openshift-marketplace(dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" Dec 12 15:27:05 crc kubenswrapper[5099]: I1212 15:27:05.997855 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.120079 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.454463 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.469143 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.762489 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.763275 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.763648 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.765685 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.831735 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.890915 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.891914 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.939080 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 15:27:06 crc kubenswrapper[5099]: I1212 15:27:06.939548 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.077602 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 
15:27:07.159089 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.237753 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.489524 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.516455 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.667202 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.671710 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.850370 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.904054 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 15:27:07 crc kubenswrapper[5099]: I1212 15:27:07.937209 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.081755 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.157597 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.337888 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.360393 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.461120 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.528810 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.677821 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.787888 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.919559 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:08 crc kubenswrapper[5099]: I1212 15:27:08.929935 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.103529 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.176846 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.297778 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.353060 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.358447 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.361151 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.392359 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.448987 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.704580 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:27:09 crc kubenswrapper[5099]: I1212 15:27:09.872807 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.023368 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.087538 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.107515 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.197753 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.329208 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.585431 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 
15:27:10.586539 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.593853 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.625528 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 15:27:10 crc kubenswrapper[5099]: I1212 15:27:10.874151 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.043695 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.119156 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.143760 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.403830 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.660468 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.811272 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 15:27:11 crc kubenswrapper[5099]: I1212 15:27:11.813815 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 15:27:12 crc kubenswrapper[5099]: I1212 15:27:12.504599 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:27:12 crc kubenswrapper[5099]: I1212 15:27:12.593736 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 15:27:12 crc kubenswrapper[5099]: I1212 15:27:12.703053 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 15:27:12 crc kubenswrapper[5099]: I1212 15:27:12.825333 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:27:12 crc kubenswrapper[5099]: I1212 15:27:12.967980 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.123405 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.172854 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.201744 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.341999 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.445217 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.589291 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.871310 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 15:27:13 crc kubenswrapper[5099]: I1212 15:27:13.885996 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 15:27:14 crc kubenswrapper[5099]: I1212 15:27:14.081110 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 15:27:14 crc kubenswrapper[5099]: I1212 15:27:14.111207 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 15:27:14 crc kubenswrapper[5099]: I1212 15:27:14.431187 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.055838 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.092304 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.148382 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.228117 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.258087 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.680425 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.696497 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 15:27:15 crc kubenswrapper[5099]: I1212 15:27:15.836458 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:27:16 crc kubenswrapper[5099]: I1212 15:27:16.038069 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:27:16 crc kubenswrapper[5099]: I1212 15:27:16.883924 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:27:17 crc kubenswrapper[5099]: I1212 15:27:17.178074 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 15:27:17 crc kubenswrapper[5099]: I1212 15:27:17.308463 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.149782 5099 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.160251 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.160338 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.169884 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.219887 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=13.219827691 podStartE2EDuration="13.219827691s" podCreationTimestamp="2025-12-12 15:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:18.183320921 +0000 UTC m=+376.287229582" watchObservedRunningTime="2025-12-12 15:27:18.219827691 +0000 UTC m=+376.323736362" Dec 12 15:27:18 crc kubenswrapper[5099]: I1212 15:27:18.222216 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=58.222203563 podStartE2EDuration="58.222203563s" podCreationTimestamp="2025-12-12 15:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:18.21092702 +0000 UTC m=+376.314835671" watchObservedRunningTime="2025-12-12 15:27:18.222203563 +0000 UTC m=+376.326112204" Dec 12 15:27:19 crc kubenswrapper[5099]: I1212 15:27:19.514278 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 15:27:19 crc kubenswrapper[5099]: I1212 15:27:19.813108 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:27:19 crc kubenswrapper[5099]: I1212 15:27:19.813184 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:27:19 crc kubenswrapper[5099]: I1212 15:27:19.819480 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:27:20 crc kubenswrapper[5099]: I1212 15:27:20.157584 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:27:20 crc kubenswrapper[5099]: I1212 15:27:20.467259 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.176454 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/2.log" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.177794 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerStarted","Data":"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148"} Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.180055 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.200291 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.503974 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.504305 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" podUID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" containerName="controller-manager" containerID="cri-o://ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d" gracePeriod=30 Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.530653 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.531023 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" podUID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" containerName="route-controller-manager" containerID="cri-o://7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319" gracePeriod=30 Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.932032 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.942334 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.970282 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c98797868-46glh"] Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.970949 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" containerName="route-controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.970979 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" containerName="route-controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.970994 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" containerName="installer" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971000 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" containerName="installer" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971010 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" containerName="controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971018 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" containerName="controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971147 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" containerName="route-controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971166 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" containerName="controller-manager" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.971175 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9ea18c6-09a8-4ec2-89e1-d9d41dc5b49f" containerName="installer" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.975758 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982114 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert\") pod \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982171 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jkj\" (UniqueName: \"kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982203 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982330 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982380 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982406 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca\") pod \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\" (UID: \"192cc6bf-9cda-439b-a3ba-197e6eefb3a0\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982429 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca\") pod \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982475 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp\") pod \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982604 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkcd9\" (UniqueName: \"kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9\") pod \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\" (UID: 
\"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982638 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config\") pod \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\" (UID: \"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a\") " Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982824 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982945 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.982974 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.983025 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.983075 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcc5\" (UniqueName: \"kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.984193 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca" (OuterVolumeSpecName: "client-ca") pod "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" (UID: "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.985645 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp" (OuterVolumeSpecName: "tmp") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.985756 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.985865 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.986391 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp" (OuterVolumeSpecName: "tmp") pod "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" (UID: "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.986525 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config" (OuterVolumeSpecName: "config") pod "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" (UID: "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.987101 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config" (OuterVolumeSpecName: "config") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:21 crc kubenswrapper[5099]: I1212 15:27:21.991588 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c98797868-46glh"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:21.999976 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj" (OuterVolumeSpecName: "kube-api-access-h8jkj") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "kube-api-access-h8jkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.000078 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9" (OuterVolumeSpecName: "kube-api-access-nkcd9") pod "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" (UID: "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a"). InnerVolumeSpecName "kube-api-access-nkcd9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.000533 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" (UID: "06d7a9f8-9cc7-4caa-b072-6f1c56abd81a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.005978 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "192cc6bf-9cda-439b-a3ba-197e6eefb3a0" (UID: "192cc6bf-9cda-439b-a3ba-197e6eefb3a0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.012396 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.024126 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.029302 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084349 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084442 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znbf\" (UniqueName: \"kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084518 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084544 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084576 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: 
\"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084607 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084630 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084655 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcc5\" (UniqueName: \"kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084697 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084717 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084853 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084910 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nkcd9\" (UniqueName: \"kubernetes.io/projected/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-kube-api-access-nkcd9\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084921 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084930 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: 
I1212 15:27:22.084938 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8jkj\" (UniqueName: \"kubernetes.io/projected/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-kube-api-access-h8jkj\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084948 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084956 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084964 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084972 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084980 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cc6bf-9cda-439b-a3ba-197e6eefb3a0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084987 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.084995 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.085166 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.086045 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.086157 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.090277 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert\") pod 
\"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.102563 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcc5\" (UniqueName: \"kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5\") pod \"route-controller-manager-5c98797868-46glh\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.184897 5099 generic.go:358] "Generic (PLEG): container finished" podID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" containerID="ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d" exitCode=0 Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.184988 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.184988 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" event={"ID":"192cc6bf-9cda-439b-a3ba-197e6eefb3a0","Type":"ContainerDied","Data":"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d"} Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185068 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f4758b8-59zs8" event={"ID":"192cc6bf-9cda-439b-a3ba-197e6eefb3a0","Type":"ContainerDied","Data":"2adfbbd937ccddc6a5fd73b323de518ed2e9ec9d73936f58c219277de6e021dd"} Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185091 5099 scope.go:117] "RemoveContainer" containerID="ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185722 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6znbf\" (UniqueName: \"kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185793 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185834 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185957 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: 
\"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.185992 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.186033 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.186815 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.187172 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.187390 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.188090 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.188429 5099 generic.go:358] "Generic (PLEG): container finished" podID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" containerID="7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319" exitCode=0 Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.188500 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.188538 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" event={"ID":"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a","Type":"ContainerDied","Data":"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319"} Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.188979 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf" event={"ID":"06d7a9f8-9cc7-4caa-b072-6f1c56abd81a","Type":"ContainerDied","Data":"e0afb50290ce8112fbba728586e3f6270f63572f180644a7c4135ff60e5b7eb2"} Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.190795 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.209091 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6znbf\" (UniqueName: \"kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf\") pod \"controller-manager-5fdff84b8-scqjf\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.212955 5099 scope.go:117] "RemoveContainer" containerID="ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d" Dec 12 15:27:22 crc kubenswrapper[5099]: E1212 15:27:22.213770 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d\": container with ID starting with ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d not found: ID does not exist" containerID="ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.213841 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d"} err="failed to get container status \"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d\": rpc error: code = NotFound desc = could not find container \"ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d\": container with ID starting with ffad854c7baad74a0ad470b56a5b64c995f0f00260136080d80092448ac6b45d not found: ID does not exist" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.213877 5099 scope.go:117] "RemoveContainer" containerID="7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.230911 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.238864 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cc6f5584-xfnqf"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.242912 5099 scope.go:117] 
"RemoveContainer" containerID="7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319" Dec 12 15:27:22 crc kubenswrapper[5099]: E1212 15:27:22.243379 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319\": container with ID starting with 7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319 not found: ID does not exist" containerID="7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.243422 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319"} err="failed to get container status \"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319\": rpc error: code = NotFound desc = could not find container \"7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319\": container with ID starting with 7f22e7563d62ba2351089b0953b1fa03861084a66b7c7baffebdd3d6e21de319 not found: ID does not exist" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.244428 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.248482 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-758f4758b8-59zs8"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.346283 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.360840 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.476242 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d7a9f8-9cc7-4caa-b072-6f1c56abd81a" path="/var/lib/kubelet/pods/06d7a9f8-9cc7-4caa-b072-6f1c56abd81a/volumes" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.476902 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192cc6bf-9cda-439b-a3ba-197e6eefb3a0" path="/var/lib/kubelet/pods/192cc6bf-9cda-439b-a3ba-197e6eefb3a0/volumes" Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.909687 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c98797868-46glh"] Dec 12 15:27:22 crc kubenswrapper[5099]: I1212 15:27:22.983260 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.195830 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" event={"ID":"228ff066-39c5-45ff-99b4-fc91f91f5aec","Type":"ContainerStarted","Data":"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48"} Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.195886 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" event={"ID":"228ff066-39c5-45ff-99b4-fc91f91f5aec","Type":"ContainerStarted","Data":"9223bb0e73fe89c85c29651eda6bd449ec4c2070f32700e9a694e934f7992ee0"} Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.196102 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.198427 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" event={"ID":"aeafd6f1-18b9-45c1-9246-76b139b3403b","Type":"ContainerStarted","Data":"f843f8edee90ec3f0e8e9331adcacbab184e62d7284c0843019437c1753ef99a"} Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.198477 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" event={"ID":"aeafd6f1-18b9-45c1-9246-76b139b3403b","Type":"ContainerStarted","Data":"764cb353221f8060bbc0d70788a87f98035d6687833d781d2b3f605008083daa"} Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.198602 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.199709 5099 patch_prober.go:28] interesting pod/controller-manager-5fdff84b8-scqjf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.199779 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: 
connection refused" Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.213342 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" podStartSLOduration=2.213320233 podStartE2EDuration="2.213320233s" podCreationTimestamp="2025-12-12 15:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:23.211908756 +0000 UTC m=+381.315817397" watchObservedRunningTime="2025-12-12 15:27:23.213320233 +0000 UTC m=+381.317228874" Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.410484 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" podStartSLOduration=2.410450601 podStartE2EDuration="2.410450601s" podCreationTimestamp="2025-12-12 15:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:23.407200296 +0000 UTC m=+381.511108957" watchObservedRunningTime="2025-12-12 15:27:23.410450601 +0000 UTC m=+381.514359242" Dec 12 15:27:23 crc kubenswrapper[5099]: I1212 15:27:23.736045 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:24 crc kubenswrapper[5099]: I1212 15:27:24.211354 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:29 crc kubenswrapper[5099]: I1212 15:27:29.498949 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:27:29 crc kubenswrapper[5099]: I1212 15:27:29.500153 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://e0baaf5b411c448fa82d5c594b0bffefa15c9b0495514ab32bbdc68ecc0b034e" gracePeriod=5 Dec 12 15:27:31 crc kubenswrapper[5099]: I1212 15:27:31.494514 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.290863 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.291178 5099 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="e0baaf5b411c448fa82d5c594b0bffefa15c9b0495514ab32bbdc68ecc0b034e" exitCode=137 Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.402638 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.402760 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537229 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537363 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537403 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537458 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537509 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537535 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537554 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537578 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.537902 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.538001 5099 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.538030 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.538072 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.547068 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.639089 5099 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:35 crc kubenswrapper[5099]: I1212 15:27:35.639180 5099 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.299441 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.299616 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.299718 5099 scope.go:117] "RemoveContainer" containerID="e0baaf5b411c448fa82d5c594b0bffefa15c9b0495514ab32bbdc68ecc0b034e" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.475888 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.476185 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.600635 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.600704 5099 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ab7eb083-83f2-48aa-a35c-3070d0fe3e91" Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.600730 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:27:36 crc kubenswrapper[5099]: I1212 15:27:36.600743 5099 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ab7eb083-83f2-48aa-a35c-3070d0fe3e91" Dec 12 15:27:37 crc kubenswrapper[5099]: I1212 15:27:37.217971 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 15:27:41 crc kubenswrapper[5099]: I1212 15:27:41.588770 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:41 crc kubenswrapper[5099]: I1212 15:27:41.589400 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerName="controller-manager" containerID="cri-o://f843f8edee90ec3f0e8e9331adcacbab184e62d7284c0843019437c1753ef99a" gracePeriod=30 Dec 12 15:27:41 crc kubenswrapper[5099]: I1212 15:27:41.610735 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c98797868-46glh"] Dec 12 15:27:41 crc kubenswrapper[5099]: I1212 15:27:41.611303 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" podUID="228ff066-39c5-45ff-99b4-fc91f91f5aec" containerName="route-controller-manager" containerID="cri-o://98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48" gracePeriod=30 Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.084140 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.111619 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert\") pod \"228ff066-39c5-45ff-99b4-fc91f91f5aec\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.111736 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp\") pod \"228ff066-39c5-45ff-99b4-fc91f91f5aec\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.111774 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config\") pod \"228ff066-39c5-45ff-99b4-fc91f91f5aec\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.111827 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvcc5\" (UniqueName: \"kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5\") pod \"228ff066-39c5-45ff-99b4-fc91f91f5aec\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.111917 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca\") pod \"228ff066-39c5-45ff-99b4-fc91f91f5aec\" (UID: \"228ff066-39c5-45ff-99b4-fc91f91f5aec\") " Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.112347 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp" (OuterVolumeSpecName: "tmp") pod "228ff066-39c5-45ff-99b4-fc91f91f5aec" (UID: "228ff066-39c5-45ff-99b4-fc91f91f5aec"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.112736 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca" (OuterVolumeSpecName: "client-ca") pod "228ff066-39c5-45ff-99b4-fc91f91f5aec" (UID: "228ff066-39c5-45ff-99b4-fc91f91f5aec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.112782 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config" (OuterVolumeSpecName: "config") pod "228ff066-39c5-45ff-99b4-fc91f91f5aec" (UID: "228ff066-39c5-45ff-99b4-fc91f91f5aec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.120243 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "228ff066-39c5-45ff-99b4-fc91f91f5aec" (UID: "228ff066-39c5-45ff-99b4-fc91f91f5aec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.120499 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"] Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121199 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="228ff066-39c5-45ff-99b4-fc91f91f5aec" containerName="route-controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121239 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="228ff066-39c5-45ff-99b4-fc91f91f5aec" containerName="route-controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121268 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121278 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121406 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.121425 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="228ff066-39c5-45ff-99b4-fc91f91f5aec" containerName="route-controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.124412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5" (OuterVolumeSpecName: "kube-api-access-xvcc5") pod "228ff066-39c5-45ff-99b4-fc91f91f5aec" (UID: "228ff066-39c5-45ff-99b4-fc91f91f5aec"). InnerVolumeSpecName "kube-api-access-xvcc5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.272708 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.272745 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228ff066-39c5-45ff-99b4-fc91f91f5aec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.272756 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/228ff066-39c5-45ff-99b4-fc91f91f5aec-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.272764 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228ff066-39c5-45ff-99b4-fc91f91f5aec-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.272780 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvcc5\" (UniqueName: \"kubernetes.io/projected/228ff066-39c5-45ff-99b4-fc91f91f5aec-kube-api-access-xvcc5\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.273198 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.273015 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"] Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.419614 5099 generic.go:358] "Generic (PLEG): container finished" podID="228ff066-39c5-45ff-99b4-fc91f91f5aec" containerID="98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48" exitCode=0 Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.419725 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" event={"ID":"228ff066-39c5-45ff-99b4-fc91f91f5aec","Type":"ContainerDied","Data":"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48"} Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.419754 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.419788 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c98797868-46glh" event={"ID":"228ff066-39c5-45ff-99b4-fc91f91f5aec","Type":"ContainerDied","Data":"9223bb0e73fe89c85c29651eda6bd449ec4c2070f32700e9a694e934f7992ee0"} Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.419809 5099 scope.go:117] "RemoveContainer" containerID="98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.424000 5099 generic.go:358] "Generic (PLEG): container finished" podID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerID="f843f8edee90ec3f0e8e9331adcacbab184e62d7284c0843019437c1753ef99a" exitCode=0 Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.424188 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" event={"ID":"aeafd6f1-18b9-45c1-9246-76b139b3403b","Type":"ContainerDied","Data":"f843f8edee90ec3f0e8e9331adcacbab184e62d7284c0843019437c1753ef99a"} Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.440100 5099 scope.go:117] "RemoveContainer" containerID="98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48" Dec 12 15:27:42 crc kubenswrapper[5099]: E1212 15:27:42.440566 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48\": container with ID starting with 98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48 not found: ID does not exist" containerID="98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.440608 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48"} err="failed to get container status \"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48\": rpc error: code = NotFound desc = could not find container \"98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48\": container with ID starting with 98f37849b14c71dd85871e94eb27ba575fa451f03b44f359c60868e99bec1f48 not found: ID does not exist" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.475111 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c98797868-46glh"]
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.575397 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.575493 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.575548 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc88v\" (UniqueName: \"kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.575581 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.575625 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.676344 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.676446 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pc88v\" (UniqueName: \"kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:42 crc kubenswrapper[5099]: I1212
15:27:42.676502 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.676759 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.676927 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.677310 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.677962 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.678818 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.681452 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.692397 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc88v\" (UniqueName: \"kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v\") pod \"route-controller-manager-67968f59d6-z9ntr\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.722112 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.823505 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.921022 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"] Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.921687 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerName="controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.921706 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerName="controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.921832 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" containerName="controller-manager" Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.946788 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"] Dec 12 15:27:42 crc kubenswrapper[5099]: I1212 15:27:42.946961 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010541 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert\") pod \"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010627 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca\") pod \"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010774 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp\") pod \"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010822 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles\") pod \"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010924 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config\") pod \"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.010978 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6znbf\" (UniqueName: \"kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf\") pod 
\"aeafd6f1-18b9-45c1-9246-76b139b3403b\" (UID: \"aeafd6f1-18b9-45c1-9246-76b139b3403b\") " Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013233 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6k9m\" (UniqueName: \"kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013322 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013400 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013478 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013508 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.013535 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.014370 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca" (OuterVolumeSpecName: "client-ca") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.014463 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config" (OuterVolumeSpecName: "config") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.014475 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.014909 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp" (OuterVolumeSpecName: "tmp") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.017674 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.018497 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf" (OuterVolumeSpecName: "kube-api-access-6znbf") pod "aeafd6f1-18b9-45c1-9246-76b139b3403b" (UID: "aeafd6f1-18b9-45c1-9246-76b139b3403b"). InnerVolumeSpecName "kube-api-access-6znbf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.025377 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"] Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.098345 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116556 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116609 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116644 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h6k9m\" (UniqueName: \"kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116761 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116811 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116884 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116940 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aeafd6f1-18b9-45c1-9246-76b139b3403b-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116958 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116970 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116982 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6znbf\" (UniqueName: \"kubernetes.io/projected/aeafd6f1-18b9-45c1-9246-76b139b3403b-kube-api-access-6znbf\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.116993 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeafd6f1-18b9-45c1-9246-76b139b3403b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.117003 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeafd6f1-18b9-45c1-9246-76b139b3403b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.118425 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.118894 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 
15:27:43.119605 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.119873 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.121602 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.133905 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6k9m\" (UniqueName: \"kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m\") pod \"controller-manager-5c7946bf8b-b995s\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.254656 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.261804 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.432309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" event={"ID":"aeafd6f1-18b9-45c1-9246-76b139b3403b","Type":"ContainerDied","Data":"764cb353221f8060bbc0d70788a87f98035d6687833d781d2b3f605008083daa"} Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.432336 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fdff84b8-scqjf" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.432406 5099 scope.go:117] "RemoveContainer" containerID="f843f8edee90ec3f0e8e9331adcacbab184e62d7284c0843019437c1753ef99a" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.435942 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" event={"ID":"78b705e4-1592-4b8a-a923-8d69a5d797c7","Type":"ContainerStarted","Data":"fc0438c906658bc92d860035208d800e2c844730139b353326d491bd5d11e6f6"} Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.435980 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" event={"ID":"78b705e4-1592-4b8a-a923-8d69a5d797c7","Type":"ContainerStarted","Data":"4a4218aebefebfecba112f63de57511af3f7101d0897a7c86b4e186400384584"} Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.436796 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.455363 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" podStartSLOduration=2.455332823 podStartE2EDuration="2.455332823s" podCreationTimestamp="2025-12-12 15:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:43.455184199 +0000 UTC m=+401.559092850" watchObservedRunningTime="2025-12-12 15:27:43.455332823 +0000 UTC m=+401.559241464" Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.477952 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.481829 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fdff84b8-scqjf"] Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.751998 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"] Dec 12 15:27:43 crc kubenswrapper[5099]: W1212 15:27:43.757412 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74d267f7_eca7_49e2_8572_da9257c2e9f1.slice/crio-4210efe7f45cc739d1234f7daad3f7d938d1767bac3030a06b2734fce6520326 WatchSource:0}: Error finding container 4210efe7f45cc739d1234f7daad3f7d938d1767bac3030a06b2734fce6520326: Status 404 returned error can't find the container with id 4210efe7f45cc739d1234f7daad3f7d938d1767bac3030a06b2734fce6520326 Dec 12 15:27:43 crc kubenswrapper[5099]: I1212 15:27:43.877556 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.448084 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" event={"ID":"74d267f7-eca7-49e2-8572-da9257c2e9f1","Type":"ContainerStarted","Data":"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82"} Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.448128 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" event={"ID":"74d267f7-eca7-49e2-8572-da9257c2e9f1","Type":"ContainerStarted","Data":"4210efe7f45cc739d1234f7daad3f7d938d1767bac3030a06b2734fce6520326"} Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.448422 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.467097 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" podStartSLOduration=3.467067781 podStartE2EDuration="3.467067781s" podCreationTimestamp="2025-12-12 15:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:44.465363337 +0000 UTC m=+402.569271988" watchObservedRunningTime="2025-12-12 15:27:44.467067781 +0000 UTC m=+402.570976422" Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.480896 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228ff066-39c5-45ff-99b4-fc91f91f5aec" path="/var/lib/kubelet/pods/228ff066-39c5-45ff-99b4-fc91f91f5aec/volumes" Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.481714 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeafd6f1-18b9-45c1-9246-76b139b3403b" path="/var/lib/kubelet/pods/aeafd6f1-18b9-45c1-9246-76b139b3403b/volumes" Dec 12 15:27:44 crc kubenswrapper[5099]: I1212 15:27:44.497986 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:47 crc kubenswrapper[5099]: I1212 15:27:47.739109 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 15:27:48 crc kubenswrapper[5099]: I1212 15:27:48.225762 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"] Dec 12 15:27:48 crc kubenswrapper[5099]: I1212 15:27:48.226199 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" podUID="74d267f7-eca7-49e2-8572-da9257c2e9f1" containerName="controller-manager" containerID="cri-o://a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82" gracePeriod=30 Dec 12 15:27:48 crc kubenswrapper[5099]: I1212 15:27:48.241795 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"] Dec 12 15:27:48 crc kubenswrapper[5099]: I1212 15:27:48.242562 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" podUID="78b705e4-1592-4b8a-a923-8d69a5d797c7" containerName="route-controller-manager" containerID="cri-o://fc0438c906658bc92d860035208d800e2c844730139b353326d491bd5d11e6f6" gracePeriod=30 Dec 12 15:27:48 crc kubenswrapper[5099]: I1212 15:27:48.325571 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.485183 5099 generic.go:358] "Generic (PLEG): container finished" 
podID="74d267f7-eca7-49e2-8572-da9257c2e9f1" containerID="a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82" exitCode=0 Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.485608 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.485294 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" event={"ID":"74d267f7-eca7-49e2-8572-da9257c2e9f1","Type":"ContainerDied","Data":"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82"} Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.485818 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s" event={"ID":"74d267f7-eca7-49e2-8572-da9257c2e9f1","Type":"ContainerDied","Data":"4210efe7f45cc739d1234f7daad3f7d938d1767bac3030a06b2734fce6520326"} Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.485862 5099 scope.go:117] "RemoveContainer" containerID="a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82" Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.487573 5099 generic.go:358] "Generic (PLEG): container finished" podID="78b705e4-1592-4b8a-a923-8d69a5d797c7" containerID="fc0438c906658bc92d860035208d800e2c844730139b353326d491bd5d11e6f6" exitCode=0 Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.487752 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" event={"ID":"78b705e4-1592-4b8a-a923-8d69a5d797c7","Type":"ContainerDied","Data":"fc0438c906658bc92d860035208d800e2c844730139b353326d491bd5d11e6f6"} Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503175 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503293 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503441 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503477 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503541 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") " Dec 12 15:27:49 crc 
kubenswrapper[5099]: I1212 15:27:49.503559 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6k9m\" (UniqueName: \"kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m\") pod \"74d267f7-eca7-49e2-8572-da9257c2e9f1\" (UID: \"74d267f7-eca7-49e2-8572-da9257c2e9f1\") "
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.503977 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca" (OuterVolumeSpecName: "client-ca") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.504477 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config" (OuterVolumeSpecName: "config") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.504854 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp" (OuterVolumeSpecName: "tmp") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.505041 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.506988 5099 scope.go:117] "RemoveContainer" containerID="a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82"
Dec 12 15:27:49 crc kubenswrapper[5099]: E1212 15:27:49.513404 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82\": container with ID starting with a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82 not found: ID does not exist" containerID="a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.513461 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82"} err="failed to get container status \"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82\": rpc error: code = NotFound desc = could not find container \"a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82\": container with ID starting with a4f43a1c86c118af3fe1cdc1e22d2431400bfdff5f18f49b225df83a0229bf82 not found: ID does not exist"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.520334 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.520894 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m" (OuterVolumeSpecName: "kube-api-access-h6k9m") pod "74d267f7-eca7-49e2-8572-da9257c2e9f1" (UID: "74d267f7-eca7-49e2-8572-da9257c2e9f1"). InnerVolumeSpecName "kube-api-access-h6k9m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.528094 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.528712 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74d267f7-eca7-49e2-8572-da9257c2e9f1" containerName="controller-manager"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.528734 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d267f7-eca7-49e2-8572-da9257c2e9f1" containerName="controller-manager"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.528834 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="74d267f7-eca7-49e2-8572-da9257c2e9f1" containerName="controller-manager"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604515 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604550 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604560 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d267f7-eca7-49e2-8572-da9257c2e9f1-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604567 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6k9m\" (UniqueName: \"kubernetes.io/projected/74d267f7-eca7-49e2-8572-da9257c2e9f1-kube-api-access-h6k9m\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604576 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d267f7-eca7-49e2-8572-da9257c2e9f1-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.604584 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d267f7-eca7-49e2-8572-da9257c2e9f1-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.939893 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942030 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942709 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942766 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942802 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhcc2\" (UniqueName: \"kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942829 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942856 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:49 crc kubenswrapper[5099]: I1212 15:27:49.942881 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044225 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044283 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044320 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhcc2\" (UniqueName: \"kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044345 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044426 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.044459 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.045065 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.045806 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.046545 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.047021 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.068848 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.082738 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhcc2\" (UniqueName: \"kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2\") pod \"controller-manager-7ffdd5878f-p2wlv\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") " pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.238444 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.254684 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp\") pod \"78b705e4-1592-4b8a-a923-8d69a5d797c7\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") "
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.254743 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca\") pod \"78b705e4-1592-4b8a-a923-8d69a5d797c7\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") "
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.254804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc88v\" (UniqueName: \"kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v\") pod \"78b705e4-1592-4b8a-a923-8d69a5d797c7\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") "
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.254836 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert\") pod \"78b705e4-1592-4b8a-a923-8d69a5d797c7\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") "
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.254886 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config\") pod \"78b705e4-1592-4b8a-a923-8d69a5d797c7\" (UID: \"78b705e4-1592-4b8a-a923-8d69a5d797c7\") "
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.255308 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp" (OuterVolumeSpecName: "tmp") pod "78b705e4-1592-4b8a-a923-8d69a5d797c7" (UID: "78b705e4-1592-4b8a-a923-8d69a5d797c7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.255804 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config" (OuterVolumeSpecName: "config") pod "78b705e4-1592-4b8a-a923-8d69a5d797c7" (UID: "78b705e4-1592-4b8a-a923-8d69a5d797c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.255903 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca" (OuterVolumeSpecName: "client-ca") pod "78b705e4-1592-4b8a-a923-8d69a5d797c7" (UID: "78b705e4-1592-4b8a-a923-8d69a5d797c7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.264982 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v" (OuterVolumeSpecName: "kube-api-access-pc88v") pod "78b705e4-1592-4b8a-a923-8d69a5d797c7" (UID: "78b705e4-1592-4b8a-a923-8d69a5d797c7"). InnerVolumeSpecName "kube-api-access-pc88v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.265842 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78b705e4-1592-4b8a-a923-8d69a5d797c7" (UID: "78b705e4-1592-4b8a-a923-8d69a5d797c7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.269330 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.270070 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78b705e4-1592-4b8a-a923-8d69a5d797c7" containerName="route-controller-manager"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.270097 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b705e4-1592-4b8a-a923-8d69a5d797c7" containerName="route-controller-manager"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.270185 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="78b705e4-1592-4b8a-a923-8d69a5d797c7" containerName="route-controller-manager"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.276576 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.286877 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.303209 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356085 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5vf\" (UniqueName: \"kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356177 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356331 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356420 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356445 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356569 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78b705e4-1592-4b8a-a923-8d69a5d797c7-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356593 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356609 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b705e4-1592-4b8a-a923-8d69a5d797c7-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356620 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78b705e4-1592-4b8a-a923-8d69a5d797c7-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.356633 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pc88v\" (UniqueName: \"kubernetes.io/projected/78b705e4-1592-4b8a-a923-8d69a5d797c7-kube-api-access-pc88v\") on node \"crc\" DevicePath \"\""
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.457568 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.458193 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.458242 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.458267 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.458356 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xk5vf\" (UniqueName: \"kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.458754 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.459094 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.461334 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.469347 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.477264 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk5vf\" (UniqueName: \"kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf\") pod \"route-controller-manager-d5d46c6b7-hxvjn\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") " pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.499406 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-b995s"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.503133 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr" event={"ID":"78b705e4-1592-4b8a-a923-8d69a5d797c7","Type":"ContainerDied","Data":"4a4218aebefebfecba112f63de57511af3f7101d0897a7c86b4e186400384584"}
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.503203 5099 scope.go:117] "RemoveContainer" containerID="fc0438c906658bc92d860035208d800e2c844730139b353326d491bd5d11e6f6"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.503323 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.531115 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.537998 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-b995s"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.542719 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:27:50 crc kubenswrapper[5099]: W1212 15:27:50.545934 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d4e7041_93c8_48a9_938a_f7e0815d28c3.slice/crio-00a6f6c84c7258c5a5d57889355b55f597c79c791a0a8ef3e90ede873f847ea2 WatchSource:0}: Error finding container 00a6f6c84c7258c5a5d57889355b55f597c79c791a0a8ef3e90ede873f847ea2: Status 404 returned error can't find the container with id 00a6f6c84c7258c5a5d57889355b55f597c79c791a0a8ef3e90ede873f847ea2
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.547451 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.551121 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-z9ntr"]
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.598045 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:50 crc kubenswrapper[5099]: I1212 15:27:50.793951 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"]
Dec 12 15:27:50 crc kubenswrapper[5099]: W1212 15:27:50.802151 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8536091f_d7d2_4684_b7d3_835d8cb05b55.slice/crio-53c26760ec3ed123d0ed0d5d742c750bf410ce24ee6e2bc2a1633df64972a5dc WatchSource:0}: Error finding container 53c26760ec3ed123d0ed0d5d742c750bf410ce24ee6e2bc2a1633df64972a5dc: Status 404 returned error can't find the container with id 53c26760ec3ed123d0ed0d5d742c750bf410ce24ee6e2bc2a1633df64972a5dc
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.510551 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" event={"ID":"9d4e7041-93c8-48a9-938a-f7e0815d28c3","Type":"ContainerStarted","Data":"d347c04e6f18d410a03fccc380d567f20bee38b682e69bf68ebfd76f32518621"}
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.510965 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" event={"ID":"9d4e7041-93c8-48a9-938a-f7e0815d28c3","Type":"ContainerStarted","Data":"00a6f6c84c7258c5a5d57889355b55f597c79c791a0a8ef3e90ede873f847ea2"}
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.510991 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.511976 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" event={"ID":"8536091f-d7d2-4684-b7d3-835d8cb05b55","Type":"ContainerStarted","Data":"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2"}
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.512002 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" event={"ID":"8536091f-d7d2-4684-b7d3-835d8cb05b55","Type":"ContainerStarted","Data":"53c26760ec3ed123d0ed0d5d742c750bf410ce24ee6e2bc2a1633df64972a5dc"}
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.512750 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.518632 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.529326 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" podStartSLOduration=3.529301795 podStartE2EDuration="3.529301795s" podCreationTimestamp="2025-12-12 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:51.528224677 +0000 UTC m=+409.632133328" watchObservedRunningTime="2025-12-12 15:27:51.529301795 +0000 UTC m=+409.633210436"
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.549376 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:27:51 crc kubenswrapper[5099]: I1212 15:27:51.550653 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" podStartSLOduration=3.550629281 podStartE2EDuration="3.550629281s" podCreationTimestamp="2025-12-12 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:27:51.54711649 +0000 UTC m=+409.651025141" watchObservedRunningTime="2025-12-12 15:27:51.550629281 +0000 UTC m=+409.654537922"
Dec 12 15:27:52 crc kubenswrapper[5099]: I1212 15:27:52.475875 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d267f7-eca7-49e2-8572-da9257c2e9f1" path="/var/lib/kubelet/pods/74d267f7-eca7-49e2-8572-da9257c2e9f1/volumes"
Dec 12 15:27:52 crc kubenswrapper[5099]: I1212 15:27:52.476936 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b705e4-1592-4b8a-a923-8d69a5d797c7" path="/var/lib/kubelet/pods/78b705e4-1592-4b8a-a923-8d69a5d797c7/volumes"
Dec 12 15:28:07 crc kubenswrapper[5099]: I1212 15:28:07.769438 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:28:07 crc kubenswrapper[5099]: I1212 15:28:07.770194 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" podUID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" containerName="controller-manager" containerID="cri-o://d347c04e6f18d410a03fccc380d567f20bee38b682e69bf68ebfd76f32518621" gracePeriod=30
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.018695 5099 generic.go:358] "Generic (PLEG): container finished" podID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" containerID="d347c04e6f18d410a03fccc380d567f20bee38b682e69bf68ebfd76f32518621" exitCode=0
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.018735 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" event={"ID":"9d4e7041-93c8-48a9-938a-f7e0815d28c3","Type":"ContainerDied","Data":"d347c04e6f18d410a03fccc380d567f20bee38b682e69bf68ebfd76f32518621"}
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.551443 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.589592 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"]
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.590346 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" containerName="controller-manager"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.590372 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" containerName="controller-manager"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.590500 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" containerName="controller-manager"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.613298 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"]
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.613503 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649104 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649154 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649175 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649234 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649284 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649379 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhcc2\" (UniqueName: \"kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2\") pod \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\" (UID: \"9d4e7041-93c8-48a9-938a-f7e0815d28c3\") "
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649474 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c618ec26-75ba-40c0-a198-6e9f3de29f73-tmp\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649499 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-config\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649522 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c618ec26-75ba-40c0-a198-6e9f3de29f73-serving-cert\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649554 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr77x\" (UniqueName: \"kubernetes.io/projected/c618ec26-75ba-40c0-a198-6e9f3de29f73-kube-api-access-nr77x\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649575 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.649637 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-client-ca\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.650046 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.650041 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp" (OuterVolumeSpecName: "tmp") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.650538 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config" (OuterVolumeSpecName: "config") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.650583 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.655118 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2" (OuterVolumeSpecName: "kube-api-access-dhcc2") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "kube-api-access-dhcc2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.660791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4e7041-93c8-48a9-938a-f7e0815d28c3" (UID: "9d4e7041-93c8-48a9-938a-f7e0815d28c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751315 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c618ec26-75ba-40c0-a198-6e9f3de29f73-tmp\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751385 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-config\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751417 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c618ec26-75ba-40c0-a198-6e9f3de29f73-serving-cert\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751466 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nr77x\" (UniqueName: \"kubernetes.io/projected/c618ec26-75ba-40c0-a198-6e9f3de29f73-kube-api-access-nr77x\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751497 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751567 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-client-ca\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751617 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhcc2\" (UniqueName: \"kubernetes.io/projected/9d4e7041-93c8-48a9-938a-f7e0815d28c3-kube-api-access-dhcc2\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751633 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751645 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e7041-93c8-48a9-938a-f7e0815d28c3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751656 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751689 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d4e7041-93c8-48a9-938a-f7e0815d28c3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751701 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d4e7041-93c8-48a9-938a-f7e0815d28c3-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.751956 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c618ec26-75ba-40c0-a198-6e9f3de29f73-tmp\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.752998 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-client-ca\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.753172 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-proxy-ca-bundles\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.753191 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c618ec26-75ba-40c0-a198-6e9f3de29f73-config\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.756003 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c618ec26-75ba-40c0-a198-6e9f3de29f73-serving-cert\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.773823 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr77x\" (UniqueName: \"kubernetes.io/projected/c618ec26-75ba-40c0-a198-6e9f3de29f73-kube-api-access-nr77x\") pod \"controller-manager-5c7946bf8b-hvlqw\" (UID: \"c618ec26-75ba-40c0-a198-6e9f3de29f73\") " pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:08 crc kubenswrapper[5099]: I1212 15:28:08.929726 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:09 crc kubenswrapper[5099]: I1212 15:28:09.038758 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"
Dec 12 15:28:09 crc kubenswrapper[5099]: I1212 15:28:09.038755 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv" event={"ID":"9d4e7041-93c8-48a9-938a-f7e0815d28c3","Type":"ContainerDied","Data":"00a6f6c84c7258c5a5d57889355b55f597c79c791a0a8ef3e90ede873f847ea2"}
Dec 12 15:28:09 crc kubenswrapper[5099]: I1212 15:28:09.038911 5099 scope.go:117] "RemoveContainer" containerID="d347c04e6f18d410a03fccc380d567f20bee38b682e69bf68ebfd76f32518621"
Dec 12 15:28:10 crc kubenswrapper[5099]: I1212 15:28:10.072353 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:28:10 crc kubenswrapper[5099]: I1212 15:28:10.076910 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7ffdd5878f-p2wlv"]
Dec 12 15:28:10 crc kubenswrapper[5099]: I1212 15:28:10.142524 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"]
Dec 12 15:28:10 crc kubenswrapper[5099]: W1212 15:28:10.154831 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc618ec26_75ba_40c0_a198_6e9f3de29f73.slice/crio-2b55ecfc27f7493fd1a3edc9893670c06efc623934acbc384c64dbd7188b2a65 WatchSource:0}: Error finding container 2b55ecfc27f7493fd1a3edc9893670c06efc623934acbc384c64dbd7188b2a65: Status 404 returned error can't find the container with id 2b55ecfc27f7493fd1a3edc9893670c06efc623934acbc384c64dbd7188b2a65
Dec 12 15:28:10 crc kubenswrapper[5099]: I1212 15:28:10.474927 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4e7041-93c8-48a9-938a-f7e0815d28c3" path="/var/lib/kubelet/pods/9d4e7041-93c8-48a9-938a-f7e0815d28c3/volumes"
Dec 12 15:28:11 crc kubenswrapper[5099]: I1212 15:28:11.082270 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw" event={"ID":"c618ec26-75ba-40c0-a198-6e9f3de29f73","Type":"ContainerStarted","Data":"6ca7db11f77ea62bf694e6468ebd7923d01e2b3fe155b65652e10ceca5bb1126"}
Dec 12 15:28:11 crc kubenswrapper[5099]: I1212 15:28:11.083436 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw" event={"ID":"c618ec26-75ba-40c0-a198-6e9f3de29f73","Type":"ContainerStarted","Data":"2b55ecfc27f7493fd1a3edc9893670c06efc623934acbc384c64dbd7188b2a65"}
Dec 12 15:28:11 crc kubenswrapper[5099]: I1212 15:28:11.085915 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:11 crc kubenswrapper[5099]: I1212 15:28:11.104014 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw" podStartSLOduration=4.103990469 podStartE2EDuration="4.103990469s" podCreationTimestamp="2025-12-12 15:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:28:11.101852844 +0000 UTC m=+429.205761485" watchObservedRunningTime="2025-12-12 15:28:11.103990469 +0000 UTC m=+429.207899100"
Dec 12 15:28:11 crc kubenswrapper[5099]: I1212 15:28:11.492450 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c7946bf8b-hvlqw"
Dec 12 15:28:16 crc kubenswrapper[5099]: I1212 15:28:16.515450 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:28:16 crc kubenswrapper[5099]: I1212 15:28:16.516301 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:28:46 crc kubenswrapper[5099]: I1212 15:28:46.515269 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:28:46 crc kubenswrapper[5099]: I1212 15:28:46.516092 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:28:47 crc kubenswrapper[5099]: I1212 15:28:47.780120 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"]
Dec 12 15:28:47 crc kubenswrapper[5099]: I1212 15:28:47.780635 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" podUID="8536091f-d7d2-4684-b7d3-835d8cb05b55" containerName="route-controller-manager" containerID="cri-o://fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2" gracePeriod=30
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.291501 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.324936 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"]
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.325670 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8536091f-d7d2-4684-b7d3-835d8cb05b55" containerName="route-controller-manager"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.325693 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8536091f-d7d2-4684-b7d3-835d8cb05b55" containerName="route-controller-manager"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.325849 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="8536091f-d7d2-4684-b7d3-835d8cb05b55" containerName="route-controller-manager"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.342667 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.344122 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"]
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.349318 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-config\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.349484 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-client-ca\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.450448 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert\") pod \"8536091f-d7d2-4684-b7d3-835d8cb05b55\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") "
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.450793 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp\") pod \"8536091f-d7d2-4684-b7d3-835d8cb05b55\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") "
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451073 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config\") pod \"8536091f-d7d2-4684-b7d3-835d8cb05b55\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") "
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451107 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca\") pod \"8536091f-d7d2-4684-b7d3-835d8cb05b55\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") "
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451145 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk5vf\" (UniqueName: \"kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf\") pod \"8536091f-d7d2-4684-b7d3-835d8cb05b55\" (UID: \"8536091f-d7d2-4684-b7d3-835d8cb05b55\") "
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451243 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35afa96-bc4f-4a1b-80e5-2b972d45d042-serving-cert\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451276 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mxw\" (UniqueName: \"kubernetes.io/projected/a35afa96-bc4f-4a1b-80e5-2b972d45d042-kube-api-access-d9mxw\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451358 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-config\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451462 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-client-ca\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451527 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a35afa96-bc4f-4a1b-80e5-2b972d45d042-tmp\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.451578 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp" (OuterVolumeSpecName: "tmp") pod "8536091f-d7d2-4684-b7d3-835d8cb05b55" (UID: "8536091f-d7d2-4684-b7d3-835d8cb05b55"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.452269 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca" (OuterVolumeSpecName: "client-ca") pod "8536091f-d7d2-4684-b7d3-835d8cb05b55" (UID: "8536091f-d7d2-4684-b7d3-835d8cb05b55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.452288 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config" (OuterVolumeSpecName: "config") pod "8536091f-d7d2-4684-b7d3-835d8cb05b55" (UID: "8536091f-d7d2-4684-b7d3-835d8cb05b55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.453259 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-config\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.453259 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a35afa96-bc4f-4a1b-80e5-2b972d45d042-client-ca\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.465968 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf" (OuterVolumeSpecName: "kube-api-access-xk5vf") pod "8536091f-d7d2-4684-b7d3-835d8cb05b55" (UID: "8536091f-d7d2-4684-b7d3-835d8cb05b55"). InnerVolumeSpecName "kube-api-access-xk5vf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.466856 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8536091f-d7d2-4684-b7d3-835d8cb05b55" (UID: "8536091f-d7d2-4684-b7d3-835d8cb05b55"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.535337 5099 generic.go:358] "Generic (PLEG): container finished" podID="8536091f-d7d2-4684-b7d3-835d8cb05b55" containerID="fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2" exitCode=0 Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.535437 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" event={"ID":"8536091f-d7d2-4684-b7d3-835d8cb05b55","Type":"ContainerDied","Data":"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2"} Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.535925 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" event={"ID":"8536091f-d7d2-4684-b7d3-835d8cb05b55","Type":"ContainerDied","Data":"53c26760ec3ed123d0ed0d5d742c750bf410ce24ee6e2bc2a1633df64972a5dc"} Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.535967 5099 scope.go:117] "RemoveContainer" containerID="fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.535518 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.552944 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a35afa96-bc4f-4a1b-80e5-2b972d45d042-tmp\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553015 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35afa96-bc4f-4a1b-80e5-2b972d45d042-serving-cert\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mxw\" (UniqueName: \"kubernetes.io/projected/a35afa96-bc4f-4a1b-80e5-2b972d45d042-kube-api-access-d9mxw\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553346 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8536091f-d7d2-4684-b7d3-835d8cb05b55-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553366 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8536091f-d7d2-4684-b7d3-835d8cb05b55-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553377 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553387 5099 
reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8536091f-d7d2-4684-b7d3-835d8cb05b55-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553399 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xk5vf\" (UniqueName: \"kubernetes.io/projected/8536091f-d7d2-4684-b7d3-835d8cb05b55-kube-api-access-xk5vf\") on node \"crc\" DevicePath \"\"" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.553810 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a35afa96-bc4f-4a1b-80e5-2b972d45d042-tmp\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.554517 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"] Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.558276 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35afa96-bc4f-4a1b-80e5-2b972d45d042-serving-cert\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.561556 5099 scope.go:117] "RemoveContainer" containerID="fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2" Dec 12 15:28:48 crc kubenswrapper[5099]: E1212 15:28:48.562075 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2\": container with ID starting with fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2 not found: ID does not exist" containerID="fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.562109 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2"} err="failed to get container status \"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2\": rpc error: code = NotFound desc = could not find container \"fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2\": container with ID starting with fdc5f143e3cf774b6e828719e0758f52cde354892f74a09a1c650e48b42545d2 not found: ID does not exist" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.565561 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5d46c6b7-hxvjn"] Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.571039 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mxw\" (UniqueName: \"kubernetes.io/projected/a35afa96-bc4f-4a1b-80e5-2b972d45d042-kube-api-access-d9mxw\") pod \"route-controller-manager-67968f59d6-ct7kp\" (UID: \"a35afa96-bc4f-4a1b-80e5-2b972d45d042\") " pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:48 crc kubenswrapper[5099]: I1212 15:28:48.673614 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.125959 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp"] Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.545568 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" event={"ID":"a35afa96-bc4f-4a1b-80e5-2b972d45d042","Type":"ContainerStarted","Data":"38a361498c76e9b4e8c22b854d240cd5a287ea4143aafaac34a54436e6a306bb"} Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.546122 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" event={"ID":"a35afa96-bc4f-4a1b-80e5-2b972d45d042","Type":"ContainerStarted","Data":"f78baf14eea88c0d59f8dcebad95d30bdcadd39ded0334c03d1ee9c3410ae5d8"} Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.546172 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.563682 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" podStartSLOduration=2.5636643599999998 podStartE2EDuration="2.56366436s" podCreationTimestamp="2025-12-12 15:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:28:49.561887644 +0000 UTC m=+467.665796305" watchObservedRunningTime="2025-12-12 15:28:49.56366436 +0000 UTC m=+467.667573011" Dec 12 15:28:49 crc kubenswrapper[5099]: I1212 15:28:49.802750 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67968f59d6-ct7kp" Dec 12 15:28:50 crc kubenswrapper[5099]: I1212 15:28:50.477515 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8536091f-d7d2-4684-b7d3-835d8cb05b55" path="/var/lib/kubelet/pods/8536091f-d7d2-4684-b7d3-835d8cb05b55/volumes" Dec 12 15:29:12 crc kubenswrapper[5099]: I1212 15:29:12.688969 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 15:29:12 crc kubenswrapper[5099]: I1212 15:29:12.795190 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"] Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.523515 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.524053 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.524297 5099 kubelet.go:2658] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.525542 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.525626 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b" gracePeriod=600 Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.705305 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b" exitCode=0 Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.705386 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b"} Dec 12 15:29:16 crc kubenswrapper[5099]: I1212 15:29:16.706058 5099 scope.go:117] "RemoveContainer" containerID="083cdafdeff5eefadf2c78beacb4a231fefe181de777a1665ddc767a6f089e14" Dec 12 15:29:17 crc kubenswrapper[5099]: I1212 15:29:17.714830 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a"} Dec 12 15:29:36 crc kubenswrapper[5099]: I1212 15:29:36.722448 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58528: no serving certificate available for the kubelet" Dec 12 15:29:37 crc kubenswrapper[5099]: I1212 15:29:37.947826 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" containerID="cri-o://d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70" gracePeriod=15 Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.448928 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.609933 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7fb8665846-glsht"] Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.610606 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.610634 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.610795 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerName="oauth-openshift" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.615636 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.629037 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fb8665846-glsht"] Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.693061 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.693164 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.693239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.693480 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.693778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694099 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " 
Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694143 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694184 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694216 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwnhv\" (UniqueName: \"kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694253 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694267 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694282 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694352 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies\") pod \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\" (UID: \"5002e2d3-b94c-4d2d-a391-9f84e63ffd20\") " Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694211 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.694448 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-login\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695151 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695296 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695329 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-error\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695373 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695423 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6n7z\" (UniqueName: \"kubernetes.io/projected/99682a78-83ed-474d-99fe-a76a39d12e54-kube-api-access-v6n7z\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695458 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695487 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-audit-policies\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695509 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695549 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99682a78-83ed-474d-99fe-a76a39d12e54-audit-dir\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695592 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-session\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695633 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695717 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.695889 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.696161 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.696172 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.696815 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.696984 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.699955 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.700202 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.700331 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.700510 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.700820 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.700921 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv" (OuterVolumeSpecName: "kube-api-access-vwnhv") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "kube-api-access-vwnhv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.701415 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.701485 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.701693 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5002e2d3-b94c-4d2d-a391-9f84e63ffd20" (UID: "5002e2d3-b94c-4d2d-a391-9f84e63ffd20"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797246 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797325 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-login\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797350 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797717 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797787 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797816 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-error\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797872 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797910 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6n7z\" (UniqueName: \"kubernetes.io/projected/99682a78-83ed-474d-99fe-a76a39d12e54-kube-api-access-v6n7z\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") 
" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797955 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.797982 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-audit-policies\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.798032 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.798107 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99682a78-83ed-474d-99fe-a76a39d12e54-audit-dir\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.798148 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-session\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.798208 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.798357 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.800960 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.799159 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.799600 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.799691 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99682a78-83ed-474d-99fe-a76a39d12e54-audit-dir\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.800989 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801057 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801071 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwnhv\" (UniqueName: \"kubernetes.io/projected/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-kube-api-access-vwnhv\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801109 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.800062 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-audit-policies\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.799203 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801125 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801424 5099 reconciler_common.go:299] "Volume detached for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801441 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801455 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801516 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801536 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.801570 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5002e2d3-b94c-4d2d-a391-9f84e63ffd20-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.804010 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.804281 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.804822 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.804831 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-error\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.805228 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.805623 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-user-template-login\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.806214 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-session\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.812275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/99682a78-83ed-474d-99fe-a76a39d12e54-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.816278 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6n7z\" (UniqueName: \"kubernetes.io/projected/99682a78-83ed-474d-99fe-a76a39d12e54-kube-api-access-v6n7z\") pod \"oauth-openshift-7fb8665846-glsht\" (UID: \"99682a78-83ed-474d-99fe-a76a39d12e54\") " pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.927720 5099 generic.go:358] "Generic (PLEG): container finished" podID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" containerID="d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70" exitCode=0 Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.927864 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" event={"ID":"5002e2d3-b94c-4d2d-a391-9f84e63ffd20","Type":"ContainerDied","Data":"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70"} Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.927935 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.927966 5099 scope.go:117] "RemoveContainer" containerID="d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.927944 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-gmbdh" event={"ID":"5002e2d3-b94c-4d2d-a391-9f84e63ffd20","Type":"ContainerDied","Data":"25701d057f5a7388a24f6bc637febcaa24fd0fdf2f6b7851cfdfabe1a6edce24"} Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.944896 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.948814 5099 scope.go:117] "RemoveContainer" containerID="d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70" Dec 12 15:29:38 crc kubenswrapper[5099]: E1212 15:29:38.949201 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70\": container with ID starting with d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70 not found: ID does not exist" containerID="d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.949236 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70"} err="failed to get container status \"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70\": rpc error: code = NotFound desc = could not find container \"d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70\": container with ID starting with d6f13adf739306ed4b18c657a6b18d4d707870445b5c316b105275186b2c5e70 not found: ID does not exist" Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.960041 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"] Dec 12 15:29:38 crc kubenswrapper[5099]: I1212 15:29:38.964631 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-gmbdh"] Dec 12 15:29:39 crc kubenswrapper[5099]: I1212 15:29:39.400389 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fb8665846-glsht"] Dec 12 15:29:39 crc kubenswrapper[5099]: I1212 15:29:39.936009 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" event={"ID":"99682a78-83ed-474d-99fe-a76a39d12e54","Type":"ContainerStarted","Data":"02b2beaf895a3c03d1becec5faf892e849e10355a7ef8743de5317fa77c90c93"} Dec 12 15:29:39 crc kubenswrapper[5099]: I1212 15:29:39.936434 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:39 crc kubenswrapper[5099]: I1212 15:29:39.936452 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" event={"ID":"99682a78-83ed-474d-99fe-a76a39d12e54","Type":"ContainerStarted","Data":"3cb614f6bd14450fb72d180bcf6f91b78640bd6f1b808366a28e47f47eb67ef9"} Dec 12 15:29:39 crc kubenswrapper[5099]: I1212 15:29:39.963058 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" podStartSLOduration=27.963011032 podStartE2EDuration="27.963011032s" podCreationTimestamp="2025-12-12 15:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:29:39.960653681 +0000 UTC m=+518.064562322" watchObservedRunningTime="2025-12-12 15:29:39.963011032 +0000 UTC m=+518.066919703" Dec 12 15:29:40 crc kubenswrapper[5099]: I1212 15:29:40.310807 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7fb8665846-glsht" Dec 12 15:29:40 
crc kubenswrapper[5099]: I1212 15:29:40.474909 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5002e2d3-b94c-4d2d-a391-9f84e63ffd20" path="/var/lib/kubelet/pods/5002e2d3-b94c-4d2d-a391-9f84e63ffd20/volumes" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.164547 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr"] Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.240823 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr"] Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.241060 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.243639 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.244527 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.326875 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrr2m\" (UniqueName: \"kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.326948 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.327052 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.428325 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.428466 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrr2m\" (UniqueName: \"kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.428502 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.429513 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.445793 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.455287 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrr2m\" (UniqueName: \"kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m\") pod \"collect-profiles-29425890-52qlr\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:00 crc kubenswrapper[5099]: I1212 15:30:00.588892 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:01 crc kubenswrapper[5099]: I1212 15:30:01.105376 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr"] Dec 12 15:30:02 crc kubenswrapper[5099]: I1212 15:30:02.099861 5099 generic.go:358] "Generic (PLEG): container finished" podID="7d308fcd-92d9-4da3-9ac5-270ad1933381" containerID="de3680cf144d6da8aec76b9041fec73d3b8a9f4e34ce257d3f2f592c83fee310" exitCode=0 Dec 12 15:30:02 crc kubenswrapper[5099]: I1212 15:30:02.099936 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" event={"ID":"7d308fcd-92d9-4da3-9ac5-270ad1933381","Type":"ContainerDied","Data":"de3680cf144d6da8aec76b9041fec73d3b8a9f4e34ce257d3f2f592c83fee310"} Dec 12 15:30:02 crc kubenswrapper[5099]: I1212 15:30:02.100342 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" event={"ID":"7d308fcd-92d9-4da3-9ac5-270ad1933381","Type":"ContainerStarted","Data":"f64ff586c5733c351abd2092db64cacbc6c7223ff0206da514f7f00cc6fb75ab"} Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.361679 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.460825 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume\") pod \"7d308fcd-92d9-4da3-9ac5-270ad1933381\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.461131 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrr2m\" (UniqueName: \"kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m\") pod \"7d308fcd-92d9-4da3-9ac5-270ad1933381\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.461213 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume\") pod \"7d308fcd-92d9-4da3-9ac5-270ad1933381\" (UID: \"7d308fcd-92d9-4da3-9ac5-270ad1933381\") " Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.462096 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d308fcd-92d9-4da3-9ac5-270ad1933381" (UID: "7d308fcd-92d9-4da3-9ac5-270ad1933381"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.462479 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d308fcd-92d9-4da3-9ac5-270ad1933381-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.474847 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d308fcd-92d9-4da3-9ac5-270ad1933381" (UID: "7d308fcd-92d9-4da3-9ac5-270ad1933381"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.474924 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m" (OuterVolumeSpecName: "kube-api-access-rrr2m") pod "7d308fcd-92d9-4da3-9ac5-270ad1933381" (UID: "7d308fcd-92d9-4da3-9ac5-270ad1933381"). InnerVolumeSpecName "kube-api-access-rrr2m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.563240 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrr2m\" (UniqueName: \"kubernetes.io/projected/7d308fcd-92d9-4da3-9ac5-270ad1933381-kube-api-access-rrr2m\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:03 crc kubenswrapper[5099]: I1212 15:30:03.563285 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d308fcd-92d9-4da3-9ac5-270ad1933381-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:04 crc kubenswrapper[5099]: I1212 15:30:04.121178 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" event={"ID":"7d308fcd-92d9-4da3-9ac5-270ad1933381","Type":"ContainerDied","Data":"f64ff586c5733c351abd2092db64cacbc6c7223ff0206da514f7f00cc6fb75ab"} Dec 12 15:30:04 crc kubenswrapper[5099]: I1212 15:30:04.121220 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f64ff586c5733c351abd2092db64cacbc6c7223ff0206da514f7f00cc6fb75ab" Dec 12 15:30:04 crc kubenswrapper[5099]: I1212 15:30:04.121200 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-52qlr" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.177410 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.178538 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmkwr" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="registry-server" containerID="cri-o://7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759" gracePeriod=30 Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.185214 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.185601 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqx68" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="registry-server" containerID="cri-o://e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd" gracePeriod=30 Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.198982 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.199927 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" containerID="cri-o://f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148" gracePeriod=30 Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.215010 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.215518 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sfqhr" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="registry-server" 
containerID="cri-o://47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01" gracePeriod=30 Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.229050 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.229408 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nx984" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="registry-server" containerID="cri-o://5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5" gracePeriod=30 Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.243761 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jm7zs"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.244691 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d308fcd-92d9-4da3-9ac5-270ad1933381" containerName="collect-profiles" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.244721 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d308fcd-92d9-4da3-9ac5-270ad1933381" containerName="collect-profiles" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.244995 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d308fcd-92d9-4da3-9ac5-270ad1933381" containerName="collect-profiles" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.391731 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jm7zs"] Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.391870 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.425972 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.426152 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dac1b54a-1cf3-4392-9eb2-678a19376a00-tmp\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.426188 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.426216 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8kh\" (UniqueName: \"kubernetes.io/projected/dac1b54a-1cf3-4392-9eb2-678a19376a00-kube-api-access-jz8kh\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.527432 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dac1b54a-1cf3-4392-9eb2-678a19376a00-tmp\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.527493 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.527535 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jz8kh\" (UniqueName: \"kubernetes.io/projected/dac1b54a-1cf3-4392-9eb2-678a19376a00-kube-api-access-jz8kh\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.527600 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.528317 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dac1b54a-1cf3-4392-9eb2-678a19376a00-tmp\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.529204 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.534649 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dac1b54a-1cf3-4392-9eb2-678a19376a00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.545956 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz8kh\" (UniqueName: \"kubernetes.io/projected/dac1b54a-1cf3-4392-9eb2-678a19376a00-kube-api-access-jz8kh\") pod \"marketplace-operator-547dbd544d-jm7zs\" (UID: \"dac1b54a-1cf3-4392-9eb2-678a19376a00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:39 crc kubenswrapper[5099]: I1212 15:30:39.720730 5099 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.007356 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jm7zs"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.106076 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.193450 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content\") pod \"91162a66-bdaa-4786-ad25-bde12241ebae\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.194141 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqgn\" (UniqueName: \"kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn\") pod \"91162a66-bdaa-4786-ad25-bde12241ebae\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.194222 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities\") pod \"91162a66-bdaa-4786-ad25-bde12241ebae\" (UID: \"91162a66-bdaa-4786-ad25-bde12241ebae\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.208959 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities" (OuterVolumeSpecName: "utilities") pod "91162a66-bdaa-4786-ad25-bde12241ebae" (UID: "91162a66-bdaa-4786-ad25-bde12241ebae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.218548 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91162a66-bdaa-4786-ad25-bde12241ebae" (UID: "91162a66-bdaa-4786-ad25-bde12241ebae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.232208 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn" (OuterVolumeSpecName: "kube-api-access-gpqgn") pod "91162a66-bdaa-4786-ad25-bde12241ebae" (UID: "91162a66-bdaa-4786-ad25-bde12241ebae"). InnerVolumeSpecName "kube-api-access-gpqgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.254546 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.277913 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.281896 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/2.log" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.281957 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.285897 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299404 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") pod \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299507 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content\") pod \"50cbce4d-a234-4a2e-b683-8ecf21d93474\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299595 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") pod \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299657 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6whh\" (UniqueName: \"kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh\") pod \"7becc184-0a0c-4a25-919f-6359f1da964e\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299706 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prddd\" (UniqueName: \"kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd\") pod \"0ec8ac77-8f80-4a46-b769-37952a91485c\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299952 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities\") pod \"50cbce4d-a234-4a2e-b683-8ecf21d93474\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.299977 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq6gq\" (UniqueName: \"kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq\") pod \"50cbce4d-a234-4a2e-b683-8ecf21d93474\" (UID: \"50cbce4d-a234-4a2e-b683-8ecf21d93474\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300007 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content\") 
pod \"7becc184-0a0c-4a25-919f-6359f1da964e\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300060 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities\") pod \"0ec8ac77-8f80-4a46-b769-37952a91485c\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300098 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities\") pod \"7becc184-0a0c-4a25-919f-6359f1da964e\" (UID: \"7becc184-0a0c-4a25-919f-6359f1da964e\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300129 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkbgn\" (UniqueName: \"kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn\") pod \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300195 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp\") pod \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\" (UID: \"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300272 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content\") pod \"0ec8ac77-8f80-4a46-b769-37952a91485c\" (UID: \"0ec8ac77-8f80-4a46-b769-37952a91485c\") " Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300637 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300681 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91162a66-bdaa-4786-ad25-bde12241ebae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.300697 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gpqgn\" (UniqueName: \"kubernetes.io/projected/91162a66-bdaa-4786-ad25-bde12241ebae-kube-api-access-gpqgn\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.303115 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities" (OuterVolumeSpecName: "utilities") pod "50cbce4d-a234-4a2e-b683-8ecf21d93474" (UID: "50cbce4d-a234-4a2e-b683-8ecf21d93474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.303404 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities" (OuterVolumeSpecName: "utilities") pod "0ec8ac77-8f80-4a46-b769-37952a91485c" (UID: "0ec8ac77-8f80-4a46-b769-37952a91485c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.304221 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.311547 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq" (OuterVolumeSpecName: "kube-api-access-jq6gq") pod "50cbce4d-a234-4a2e-b683-8ecf21d93474" (UID: "50cbce4d-a234-4a2e-b683-8ecf21d93474"). InnerVolumeSpecName "kube-api-access-jq6gq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.311713 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp" (OuterVolumeSpecName: "tmp") pod "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.312920 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities" (OuterVolumeSpecName: "utilities") pod "7becc184-0a0c-4a25-919f-6359f1da964e" (UID: "7becc184-0a0c-4a25-919f-6359f1da964e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.319537 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn" (OuterVolumeSpecName: "kube-api-access-tkbgn") pod "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d"). InnerVolumeSpecName "kube-api-access-tkbgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.319718 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd" (OuterVolumeSpecName: "kube-api-access-prddd") pod "0ec8ac77-8f80-4a46-b769-37952a91485c" (UID: "0ec8ac77-8f80-4a46-b769-37952a91485c"). InnerVolumeSpecName "kube-api-access-prddd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.324282 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" (UID: "dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.336010 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh" (OuterVolumeSpecName: "kube-api-access-b6whh") pod "7becc184-0a0c-4a25-919f-6359f1da964e" (UID: "7becc184-0a0c-4a25-919f-6359f1da964e"). InnerVolumeSpecName "kube-api-access-b6whh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.360871 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50cbce4d-a234-4a2e-b683-8ecf21d93474" (UID: "50cbce4d-a234-4a2e-b683-8ecf21d93474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402714 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402786 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq6gq\" (UniqueName: \"kubernetes.io/projected/50cbce4d-a234-4a2e-b683-8ecf21d93474-kube-api-access-jq6gq\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402804 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402816 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402828 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkbgn\" (UniqueName: \"kubernetes.io/projected/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-kube-api-access-tkbgn\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402867 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402880 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402891 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cbce4d-a234-4a2e-b683-8ecf21d93474-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402903 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402942 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b6whh\" (UniqueName: 
\"kubernetes.io/projected/7becc184-0a0c-4a25-919f-6359f1da964e-kube-api-access-b6whh\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.402958 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prddd\" (UniqueName: \"kubernetes.io/projected/0ec8ac77-8f80-4a46-b769-37952a91485c-kube-api-access-prddd\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409495 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-fmnlp_dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/marketplace-operator/2.log" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409570 5099 generic.go:358] "Generic (PLEG): container finished" podID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerID="f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148" exitCode=0 Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409834 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerDied","Data":"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409878 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409905 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fmnlp" event={"ID":"dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d","Type":"ContainerDied","Data":"6c9bc29bcf39d6878396dcd6ca8ac3d1138c9c2c82684433626f432cbc726b52"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.409932 5099 scope.go:117] "RemoveContainer" containerID="f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.414374 5099 generic.go:358] "Generic (PLEG): container finished" podID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerID="7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759" exitCode=0 Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.414587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerDied","Data":"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.414689 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmkwr" event={"ID":"50cbce4d-a234-4a2e-b683-8ecf21d93474","Type":"ContainerDied","Data":"6a9785249a7fc266214a8c4aa71befe83ab5a307c37643391ef2a478ad5a3486"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.414958 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmkwr" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.420446 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerDied","Data":"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.420480 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfqhr" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.420346 5099 generic.go:358] "Generic (PLEG): container finished" podID="91162a66-bdaa-4786-ad25-bde12241ebae" containerID="47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01" exitCode=0 Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.441159 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfqhr" event={"ID":"91162a66-bdaa-4786-ad25-bde12241ebae","Type":"ContainerDied","Data":"58cf2f2ac667ee9b915837197f6a51bacbd619d0bc1e8996ab6efa2bc16995d7"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.452246 5099 generic.go:358] "Generic (PLEG): container finished" podID="7becc184-0a0c-4a25-919f-6359f1da964e" containerID="e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd" exitCode=0 Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.452310 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqx68" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.452368 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerDied","Data":"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.452426 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqx68" event={"ID":"7becc184-0a0c-4a25-919f-6359f1da964e","Type":"ContainerDied","Data":"7fd1f7f605d1c63a57f09b24e9def452ba73688b27c0ccfa031008393b950951"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.455827 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.458744 5099 generic.go:358] "Generic (PLEG): container finished" podID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerID="5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5" exitCode=0 Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.458809 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerDied","Data":"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.458840 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx984" event={"ID":"0ec8ac77-8f80-4a46-b769-37952a91485c","Type":"ContainerDied","Data":"50b2e2cd38c0ea5ca00fe360b91959c8672d54653cf80ccfd9ab5d64c7ed9dc5"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.458920 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx984" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.460191 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.461636 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" event={"ID":"dac1b54a-1cf3-4392-9eb2-678a19376a00","Type":"ContainerStarted","Data":"c2934f11cdbe8e37d047eee2e221aad40f09311bc667dd977351fcee4792fe31"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.461689 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" event={"ID":"dac1b54a-1cf3-4392-9eb2-678a19376a00","Type":"ContainerStarted","Data":"c6481d0823b56c7acf67e5d3a5139fd6e7138611a13e39201d63b3db6193833d"} Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.463153 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.464576 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-jm7zs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" start-of-body= Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.464838 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" podUID="dac1b54a-1cf3-4392-9eb2-678a19376a00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.476716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7becc184-0a0c-4a25-919f-6359f1da964e" (UID: "7becc184-0a0c-4a25-919f-6359f1da964e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.486342 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" podStartSLOduration=1.486317889 podStartE2EDuration="1.486317889s" podCreationTimestamp="2025-12-12 15:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:30:40.481265989 +0000 UTC m=+578.585174630" watchObservedRunningTime="2025-12-12 15:30:40.486317889 +0000 UTC m=+578.590226530" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.503167 5099 scope.go:117] "RemoveContainer" containerID="f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.504351 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7becc184-0a0c-4a25-919f-6359f1da964e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.504810 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148\": container with ID starting with f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148 not found: ID does not exist" containerID="f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.504864 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148"} err="failed to get container status \"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148\": rpc error: code = NotFound desc = could not find container \"f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148\": container with ID starting with f865040c7a8d97f791247cb269595befb1ac314d5e645bd3dfadc65da0a69148 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.504893 5099 scope.go:117] "RemoveContainer" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.507052 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fmnlp"] Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.507229 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c\": container with ID starting with 430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c not found: ID does not exist" containerID="430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.507286 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c"} err="failed to get container status \"430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c\": rpc error: code = NotFound desc = could not find container \"430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c\": container with ID starting with 430225f419f0464b8942c5f3bb3d6e92af747ed88bee378800825b80ce6cc63c not found: 
ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.507319 5099 scope.go:117] "RemoveContainer" containerID="7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.512351 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.521482 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmkwr"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.526367 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.526756 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec8ac77-8f80-4a46-b769-37952a91485c" (UID: "0ec8ac77-8f80-4a46-b769-37952a91485c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.534178 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfqhr"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.538749 5099 scope.go:117] "RemoveContainer" containerID="d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.568690 5099 scope.go:117] "RemoveContainer" containerID="6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.587862 5099 scope.go:117] "RemoveContainer" containerID="7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.588542 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759\": container with ID starting with 7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759 not found: ID does not exist" containerID="7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.588578 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759"} err="failed to get container status \"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759\": rpc error: code = NotFound desc = could not find container \"7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759\": container with ID starting with 7f25c9b88b35d546c34f854a2850ce100f57f7a2294a807eae59152913aae759 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.588612 5099 scope.go:117] "RemoveContainer" containerID="d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.589143 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43\": container with ID starting with d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43 not found: ID does not exist" 
containerID="d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.589166 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43"} err="failed to get container status \"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43\": rpc error: code = NotFound desc = could not find container \"d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43\": container with ID starting with d86dfd48aad406143474bca8513bc7876c882439059b6dddf07773fd4198dd43 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.589182 5099 scope.go:117] "RemoveContainer" containerID="6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.589564 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577\": container with ID starting with 6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577 not found: ID does not exist" containerID="6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.589585 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577"} err="failed to get container status \"6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577\": rpc error: code = NotFound desc = could not find container \"6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577\": container with ID starting with 6f124120a030f637be21b1d2d31a8f9247f9b326e901b310e3e49ce57b00e577 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.589605 5099 scope.go:117] "RemoveContainer" containerID="47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.606687 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec8ac77-8f80-4a46-b769-37952a91485c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.606836 5099 scope.go:117] "RemoveContainer" containerID="a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.626775 5099 scope.go:117] "RemoveContainer" containerID="a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.645023 5099 scope.go:117] "RemoveContainer" containerID="47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.645945 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01\": container with ID starting with 47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01 not found: ID does not exist" containerID="47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.645980 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01"} err="failed to get container status \"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01\": rpc error: code = NotFound desc = could not find container \"47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01\": container with ID starting with 47c51d78f3574441f2637dbb039d6d93e486ff0dd82f0cd0793b035c3e2eef01 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.646005 5099 scope.go:117] "RemoveContainer" containerID="a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.646362 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4\": container with ID starting with a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4 not found: ID does not exist" containerID="a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.646396 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4"} err="failed to get container status \"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4\": rpc error: code = NotFound desc = could not find container \"a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4\": container with ID starting with a11efc1eb5aaa4d9e6fb154a7f7a2ab9584d5f7d071a104e3db7dac45eb63fc4 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.646415 5099 scope.go:117] "RemoveContainer" containerID="a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.647389 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7\": container with ID starting with a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7 not found: ID does not exist" containerID="a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.647419 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7"} err="failed to get container status \"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7\": rpc error: code = NotFound desc = could not find container \"a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7\": container with ID starting with a7ee75727f3543ff7a5856c6c8dfd170ebf66c00fa511054cb25726098c737f7 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.647439 5099 scope.go:117] "RemoveContainer" containerID="e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.663753 5099 scope.go:117] "RemoveContainer" containerID="fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.685606 5099 scope.go:117] "RemoveContainer" containerID="db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.700977 5099 
scope.go:117] "RemoveContainer" containerID="e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.701468 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd\": container with ID starting with e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd not found: ID does not exist" containerID="e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.701503 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd"} err="failed to get container status \"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd\": rpc error: code = NotFound desc = could not find container \"e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd\": container with ID starting with e378ac765b915bef4bc54a163111029ac8876e32240fde2c5c18188642ba15dd not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.701523 5099 scope.go:117] "RemoveContainer" containerID="fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.703249 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b\": container with ID starting with fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b not found: ID does not exist" containerID="fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.703311 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b"} err="failed to get container status \"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b\": rpc error: code = NotFound desc = could not find container \"fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b\": container with ID starting with fe51a45e3074c1fdf0c666e7a24ea443f3bf1f86093902a5c3886d150ed88d0b not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.703330 5099 scope.go:117] "RemoveContainer" containerID="db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.703711 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02\": container with ID starting with db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02 not found: ID does not exist" containerID="db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.703743 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02"} err="failed to get container status \"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02\": rpc error: code = NotFound desc = could not find container \"db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02\": container with ID starting with 
db4a12710f1159125f5d0a2f2aed0176e78afc4a698367754fe109fd610d3b02 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.703763 5099 scope.go:117] "RemoveContainer" containerID="5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.721619 5099 scope.go:117] "RemoveContainer" containerID="1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.742737 5099 scope.go:117] "RemoveContainer" containerID="cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.759897 5099 scope.go:117] "RemoveContainer" containerID="5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.760630 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5\": container with ID starting with 5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5 not found: ID does not exist" containerID="5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.762117 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5"} err="failed to get container status \"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5\": rpc error: code = NotFound desc = could not find container \"5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5\": container with ID starting with 5244e921a7cb09dc547e96582160e0b9a0b0c236250df16a3f971c27fd3b69c5 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.762181 5099 scope.go:117] "RemoveContainer" containerID="1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.762566 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e\": container with ID starting with 1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e not found: ID does not exist" containerID="1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.762595 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e"} err="failed to get container status \"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e\": rpc error: code = NotFound desc = could not find container \"1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e\": container with ID starting with 1cb9054ecc91994b303afa6ac4a9452edbdb6cc77aabf1c767807679e0ce049e not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.762617 5099 scope.go:117] "RemoveContainer" containerID="cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874" Dec 12 15:30:40 crc kubenswrapper[5099]: E1212 15:30:40.763149 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874\": container 
with ID starting with cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874 not found: ID does not exist" containerID="cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.763204 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874"} err="failed to get container status \"cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874\": rpc error: code = NotFound desc = could not find container \"cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874\": container with ID starting with cf1069c450304968537c0886cd7264ec09789fa6bc8ca86cdf7e52f9e4c01874 not found: ID does not exist" Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.771764 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.780038 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqx68"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.795525 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:30:40 crc kubenswrapper[5099]: I1212 15:30:40.800625 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nx984"] Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.393876 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394382 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394394 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394405 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394410 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394422 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394427 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394436 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394442 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394450 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="registry-server" Dec 12 15:30:41 crc 
kubenswrapper[5099]: I1212 15:30:41.394455 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394463 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394468 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394476 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394481 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394489 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394494 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394501 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394506 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="extract-utilities" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394512 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394517 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394522 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394527 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394534 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394539 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394545 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394551 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394558 5099 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394563 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="extract-content" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394570 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394576 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394677 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394687 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394696 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394705 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394712 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394720 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394727 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394733 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" containerName="registry-server" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394804 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.394811 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" containerName="marketplace-operator" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.415887 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.416046 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.419257 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.492636 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-jm7zs" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.554342 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htfs\" (UniqueName: \"kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.554421 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.554580 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.593483 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ntm8"] Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.599296 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.603225 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.606317 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ntm8"] Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.655567 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6htfs\" (UniqueName: \"kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.655622 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpccx\" (UniqueName: \"kubernetes.io/projected/64bc492d-434f-42f3-a917-bf9ca5ff8e78-kube-api-access-gpccx\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.655652 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.655692 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-catalog-content\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.655718 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.656199 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-utilities\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.656381 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.656890 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities\") pod \"redhat-marketplace-bt9c8\" (UID: 
\"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.674886 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htfs\" (UniqueName: \"kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs\") pod \"redhat-marketplace-bt9c8\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.738735 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.758009 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpccx\" (UniqueName: \"kubernetes.io/projected/64bc492d-434f-42f3-a917-bf9ca5ff8e78-kube-api-access-gpccx\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.758251 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-catalog-content\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.758308 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-utilities\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.758843 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-catalog-content\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.758861 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bc492d-434f-42f3-a917-bf9ca5ff8e78-utilities\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.788041 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpccx\" (UniqueName: \"kubernetes.io/projected/64bc492d-434f-42f3-a917-bf9ca5ff8e78-kube-api-access-gpccx\") pod \"redhat-operators-4ntm8\" (UID: \"64bc492d-434f-42f3-a917-bf9ca5ff8e78\") " pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.912973 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:41 crc kubenswrapper[5099]: I1212 15:30:41.973679 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:30:41 crc kubenswrapper[5099]: W1212 15:30:41.990513 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3abb2de8_01ab_4808_86ec_35fba75a4cfc.slice/crio-faf54b301098c3dd73795588583561ba18124f451181b8fa06b05b108940a75e WatchSource:0}: Error finding container faf54b301098c3dd73795588583561ba18124f451181b8fa06b05b108940a75e: Status 404 returned error can't find the container with id faf54b301098c3dd73795588583561ba18124f451181b8fa06b05b108940a75e Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.359007 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ntm8"] Dec 12 15:30:42 crc kubenswrapper[5099]: W1212 15:30:42.365817 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bc492d_434f_42f3_a917_bf9ca5ff8e78.slice/crio-1d9ea872a49e7536b8a2f30028c532f5cbbb072f2767188a272c8965b6e0ed76 WatchSource:0}: Error finding container 1d9ea872a49e7536b8a2f30028c532f5cbbb072f2767188a272c8965b6e0ed76: Status 404 returned error can't find the container with id 1d9ea872a49e7536b8a2f30028c532f5cbbb072f2767188a272c8965b6e0ed76 Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.484316 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec8ac77-8f80-4a46-b769-37952a91485c" path="/var/lib/kubelet/pods/0ec8ac77-8f80-4a46-b769-37952a91485c/volumes" Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.485340 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cbce4d-a234-4a2e-b683-8ecf21d93474" path="/var/lib/kubelet/pods/50cbce4d-a234-4a2e-b683-8ecf21d93474/volumes" Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.486057 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7becc184-0a0c-4a25-919f-6359f1da964e" path="/var/lib/kubelet/pods/7becc184-0a0c-4a25-919f-6359f1da964e/volumes" Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.486705 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91162a66-bdaa-4786-ad25-bde12241ebae" path="/var/lib/kubelet/pods/91162a66-bdaa-4786-ad25-bde12241ebae/volumes" Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.487468 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d" path="/var/lib/kubelet/pods/dfe50a69-3d5e-4a45-a4ac-2cd6a12c663d/volumes" Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.502110 5099 generic.go:358] "Generic (PLEG): container finished" podID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerID="10eff7e520e0b1e16af33e0113be6ba800c1a5176c94551664e760504d333998" exitCode=0 Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.502181 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerDied","Data":"10eff7e520e0b1e16af33e0113be6ba800c1a5176c94551664e760504d333998"} Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.502768 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" 
event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerStarted","Data":"faf54b301098c3dd73795588583561ba18124f451181b8fa06b05b108940a75e"} Dec 12 15:30:42 crc kubenswrapper[5099]: I1212 15:30:42.509614 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ntm8" event={"ID":"64bc492d-434f-42f3-a917-bf9ca5ff8e78","Type":"ContainerStarted","Data":"1d9ea872a49e7536b8a2f30028c532f5cbbb072f2767188a272c8965b6e0ed76"} Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.519076 5099 generic.go:358] "Generic (PLEG): container finished" podID="64bc492d-434f-42f3-a917-bf9ca5ff8e78" containerID="f19ae5cc5b8553a8b64d051f7eac14273239973596da51accfda05b62382de51" exitCode=0 Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.519268 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ntm8" event={"ID":"64bc492d-434f-42f3-a917-bf9ca5ff8e78","Type":"ContainerDied","Data":"f19ae5cc5b8553a8b64d051f7eac14273239973596da51accfda05b62382de51"} Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.523631 5099 generic.go:358] "Generic (PLEG): container finished" podID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerID="8abb74e1c6a7277ffc6612793490538bc2470c53d9cf1cf0357d84a34abaf76f" exitCode=0 Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.523801 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerDied","Data":"8abb74e1c6a7277ffc6612793490538bc2470c53d9cf1cf0357d84a34abaf76f"} Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.872314 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sh85"] Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.881898 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.885339 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.890483 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sh85"] Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.990526 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjsh\" (UniqueName: \"kubernetes.io/projected/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-kube-api-access-whjsh\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.990605 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-catalog-content\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.990647 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-utilities\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:43 crc kubenswrapper[5099]: I1212 15:30:43.991424 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdblm"] Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.048123 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdblm"] Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.048291 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.053342 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.091733 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-utilities\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.091794 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-utilities\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.091848 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-catalog-content\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.091873 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndtl\" (UniqueName: \"kubernetes.io/projected/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-kube-api-access-kndtl\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.091984 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whjsh\" (UniqueName: \"kubernetes.io/projected/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-kube-api-access-whjsh\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.092076 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-catalog-content\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.092426 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-utilities\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.092598 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-catalog-content\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.114194 
5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjsh\" (UniqueName: \"kubernetes.io/projected/b239ff5f-dd4c-4190-98ef-0f19b3d4c440-kube-api-access-whjsh\") pod \"community-operators-2sh85\" (UID: \"b239ff5f-dd4c-4190-98ef-0f19b3d4c440\") " pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.193683 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-catalog-content\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.193749 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kndtl\" (UniqueName: \"kubernetes.io/projected/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-kube-api-access-kndtl\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.193861 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-utilities\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.194264 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-catalog-content\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.194338 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-utilities\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.199535 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.212922 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndtl\" (UniqueName: \"kubernetes.io/projected/50c57c19-1624-4e17-aaa4-7d81f8d8fd18-kube-api-access-kndtl\") pod \"certified-operators-wdblm\" (UID: \"50c57c19-1624-4e17-aaa4-7d81f8d8fd18\") " pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.362871 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.397584 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sh85"] Dec 12 15:30:44 crc kubenswrapper[5099]: W1212 15:30:44.405156 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb239ff5f_dd4c_4190_98ef_0f19b3d4c440.slice/crio-7aa65622f8663e2eac32b629bbd3bb659f7f9fe55c4dcdd23890fe058396b3d6 WatchSource:0}: Error finding container 7aa65622f8663e2eac32b629bbd3bb659f7f9fe55c4dcdd23890fe058396b3d6: Status 404 returned error can't find the container with id 7aa65622f8663e2eac32b629bbd3bb659f7f9fe55c4dcdd23890fe058396b3d6 Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.535828 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerStarted","Data":"28b211f9836bf827c42dcd09babfaf82432a705ed4f2ebd0bcb014198d559459"} Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.537183 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sh85" event={"ID":"b239ff5f-dd4c-4190-98ef-0f19b3d4c440","Type":"ContainerStarted","Data":"7aa65622f8663e2eac32b629bbd3bb659f7f9fe55c4dcdd23890fe058396b3d6"} Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.590320 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdblm"] Dec 12 15:30:44 crc kubenswrapper[5099]: I1212 15:30:44.601899 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bt9c8" podStartSLOduration=2.939824574 podStartE2EDuration="3.601881943s" podCreationTimestamp="2025-12-12 15:30:41 +0000 UTC" firstStartedPulling="2025-12-12 15:30:42.503205719 +0000 UTC m=+580.607114360" lastFinishedPulling="2025-12-12 15:30:43.165263088 +0000 UTC m=+581.269171729" observedRunningTime="2025-12-12 15:30:44.597731645 +0000 UTC m=+582.701640276" watchObservedRunningTime="2025-12-12 15:30:44.601881943 +0000 UTC m=+582.705790584" Dec 12 15:30:44 crc kubenswrapper[5099]: W1212 15:30:44.606871 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50c57c19_1624_4e17_aaa4_7d81f8d8fd18.slice/crio-e336344067ed484af2c503d1f8c3d6779a33f3b53f6374a071e14c2ca34d517f WatchSource:0}: Error finding container e336344067ed484af2c503d1f8c3d6779a33f3b53f6374a071e14c2ca34d517f: Status 404 returned error can't find the container with id e336344067ed484af2c503d1f8c3d6779a33f3b53f6374a071e14c2ca34d517f Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.545684 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ntm8" event={"ID":"64bc492d-434f-42f3-a917-bf9ca5ff8e78","Type":"ContainerStarted","Data":"f3dee05d4f2b83aafe26228cf7ed68f6f6cc612de79ea2200c8d16d31a0c4126"} Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.548392 5099 generic.go:358] "Generic (PLEG): container finished" podID="b239ff5f-dd4c-4190-98ef-0f19b3d4c440" containerID="bc3a2f25fd0be3b41c04feaee977718e5d8117fbc4c5a225806ce13942d31416" exitCode=0 Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.548516 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sh85" 
event={"ID":"b239ff5f-dd4c-4190-98ef-0f19b3d4c440","Type":"ContainerDied","Data":"bc3a2f25fd0be3b41c04feaee977718e5d8117fbc4c5a225806ce13942d31416"} Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.552346 5099 generic.go:358] "Generic (PLEG): container finished" podID="50c57c19-1624-4e17-aaa4-7d81f8d8fd18" containerID="e08503594b0fd8a9f20c793ff51565dfa73e7df921a49b8d6944a11a5ba1f587" exitCode=0 Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.553201 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdblm" event={"ID":"50c57c19-1624-4e17-aaa4-7d81f8d8fd18","Type":"ContainerDied","Data":"e08503594b0fd8a9f20c793ff51565dfa73e7df921a49b8d6944a11a5ba1f587"} Dec 12 15:30:45 crc kubenswrapper[5099]: I1212 15:30:45.553284 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdblm" event={"ID":"50c57c19-1624-4e17-aaa4-7d81f8d8fd18","Type":"ContainerStarted","Data":"e336344067ed484af2c503d1f8c3d6779a33f3b53f6374a071e14c2ca34d517f"} Dec 12 15:30:46 crc kubenswrapper[5099]: I1212 15:30:46.560801 5099 generic.go:358] "Generic (PLEG): container finished" podID="64bc492d-434f-42f3-a917-bf9ca5ff8e78" containerID="f3dee05d4f2b83aafe26228cf7ed68f6f6cc612de79ea2200c8d16d31a0c4126" exitCode=0 Dec 12 15:30:46 crc kubenswrapper[5099]: I1212 15:30:46.560909 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ntm8" event={"ID":"64bc492d-434f-42f3-a917-bf9ca5ff8e78","Type":"ContainerDied","Data":"f3dee05d4f2b83aafe26228cf7ed68f6f6cc612de79ea2200c8d16d31a0c4126"} Dec 12 15:30:46 crc kubenswrapper[5099]: I1212 15:30:46.563694 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sh85" event={"ID":"b239ff5f-dd4c-4190-98ef-0f19b3d4c440","Type":"ContainerStarted","Data":"1c4b9dffa1d8ccaba55b100715dc5756b2653d3bfa268e8b7666f4e5100db6d4"} Dec 12 15:30:46 crc kubenswrapper[5099]: I1212 15:30:46.567966 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdblm" event={"ID":"50c57c19-1624-4e17-aaa4-7d81f8d8fd18","Type":"ContainerStarted","Data":"b163a46a4d21aac36ea2fdac67757382339b09275a82c3f12b0a63b775479b1d"} Dec 12 15:30:47 crc kubenswrapper[5099]: I1212 15:30:47.576771 5099 generic.go:358] "Generic (PLEG): container finished" podID="b239ff5f-dd4c-4190-98ef-0f19b3d4c440" containerID="1c4b9dffa1d8ccaba55b100715dc5756b2653d3bfa268e8b7666f4e5100db6d4" exitCode=0 Dec 12 15:30:47 crc kubenswrapper[5099]: I1212 15:30:47.576831 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sh85" event={"ID":"b239ff5f-dd4c-4190-98ef-0f19b3d4c440","Type":"ContainerDied","Data":"1c4b9dffa1d8ccaba55b100715dc5756b2653d3bfa268e8b7666f4e5100db6d4"} Dec 12 15:30:47 crc kubenswrapper[5099]: I1212 15:30:47.580830 5099 generic.go:358] "Generic (PLEG): container finished" podID="50c57c19-1624-4e17-aaa4-7d81f8d8fd18" containerID="b163a46a4d21aac36ea2fdac67757382339b09275a82c3f12b0a63b775479b1d" exitCode=0 Dec 12 15:30:47 crc kubenswrapper[5099]: I1212 15:30:47.580978 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdblm" event={"ID":"50c57c19-1624-4e17-aaa4-7d81f8d8fd18","Type":"ContainerDied","Data":"b163a46a4d21aac36ea2fdac67757382339b09275a82c3f12b0a63b775479b1d"} Dec 12 15:30:47 crc kubenswrapper[5099]: I1212 15:30:47.585911 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4ntm8" event={"ID":"64bc492d-434f-42f3-a917-bf9ca5ff8e78","Type":"ContainerStarted","Data":"ee1deaf148f787daffd879aad422b33c792eecc53ae84b1155e7945601ed4619"} Dec 12 15:30:48 crc kubenswrapper[5099]: I1212 15:30:48.594418 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sh85" event={"ID":"b239ff5f-dd4c-4190-98ef-0f19b3d4c440","Type":"ContainerStarted","Data":"2d23e116c843eb30c531243fac2927d624b35671b5fa49acf53b1e91d94c07d4"} Dec 12 15:30:48 crc kubenswrapper[5099]: I1212 15:30:48.597469 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdblm" event={"ID":"50c57c19-1624-4e17-aaa4-7d81f8d8fd18","Type":"ContainerStarted","Data":"9c13684961d4c5d2cda643747c2f44e13bca5790f8099299dd4ed86ecb8e19ce"} Dec 12 15:30:48 crc kubenswrapper[5099]: I1212 15:30:48.621775 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ntm8" podStartSLOduration=5.886204528 podStartE2EDuration="7.621755302s" podCreationTimestamp="2025-12-12 15:30:41 +0000 UTC" firstStartedPulling="2025-12-12 15:30:43.520159364 +0000 UTC m=+581.624068005" lastFinishedPulling="2025-12-12 15:30:45.255710138 +0000 UTC m=+583.359618779" observedRunningTime="2025-12-12 15:30:47.830230376 +0000 UTC m=+585.934139037" watchObservedRunningTime="2025-12-12 15:30:48.621755302 +0000 UTC m=+586.725663953" Dec 12 15:30:48 crc kubenswrapper[5099]: I1212 15:30:48.646392 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sh85" podStartSLOduration=4.89692642 podStartE2EDuration="5.646375458s" podCreationTimestamp="2025-12-12 15:30:43 +0000 UTC" firstStartedPulling="2025-12-12 15:30:45.549398512 +0000 UTC m=+583.653307153" lastFinishedPulling="2025-12-12 15:30:46.29884753 +0000 UTC m=+584.402756191" observedRunningTime="2025-12-12 15:30:48.623257191 +0000 UTC m=+586.727165842" watchObservedRunningTime="2025-12-12 15:30:48.646375458 +0000 UTC m=+586.750284099" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 15:30:51.739049 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 15:30:51.739692 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 15:30:51.796874 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 15:30:51.825426 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdblm" podStartSLOduration=8.121407891 podStartE2EDuration="8.825400474s" podCreationTimestamp="2025-12-12 15:30:43 +0000 UTC" firstStartedPulling="2025-12-12 15:30:45.55395347 +0000 UTC m=+583.657862111" lastFinishedPulling="2025-12-12 15:30:46.257946053 +0000 UTC m=+584.361854694" observedRunningTime="2025-12-12 15:30:48.648890663 +0000 UTC m=+586.752799304" watchObservedRunningTime="2025-12-12 15:30:51.825400474 +0000 UTC m=+589.929309135" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 15:30:51.914701 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:51 crc kubenswrapper[5099]: I1212 
15:30:51.914776 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:30:52 crc kubenswrapper[5099]: I1212 15:30:52.034512 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:30:53 crc kubenswrapper[5099]: I1212 15:30:53.014623 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ntm8" podUID="64bc492d-434f-42f3-a917-bf9ca5ff8e78" containerName="registry-server" probeResult="failure" output=< Dec 12 15:30:53 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:30:53 crc kubenswrapper[5099]: > Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.199725 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.199805 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.242886 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.363461 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.363798 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:54 crc kubenswrapper[5099]: I1212 15:30:54.400840 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:30:55 crc kubenswrapper[5099]: I1212 15:30:55.081910 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sh85" Dec 12 15:30:55 crc kubenswrapper[5099]: I1212 15:30:55.085455 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdblm" Dec 12 15:31:01 crc kubenswrapper[5099]: I1212 15:31:01.953410 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:31:02 crc kubenswrapper[5099]: I1212 15:31:02.006435 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ntm8" Dec 12 15:31:02 crc kubenswrapper[5099]: I1212 15:31:02.695858 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:31:02 crc kubenswrapper[5099]: I1212 15:31:02.701999 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:31:02 crc kubenswrapper[5099]: I1212 15:31:02.702579 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:31:02 crc kubenswrapper[5099]: I1212 15:31:02.707607 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:31:16 crc kubenswrapper[5099]: I1212 15:31:16.515174 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:31:16 crc kubenswrapper[5099]: I1212 15:31:16.515829 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:31:46 crc kubenswrapper[5099]: I1212 15:31:46.571217 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:31:46 crc kubenswrapper[5099]: I1212 15:31:46.574162 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.515625 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.516307 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.516364 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.517003 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.517073 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a" gracePeriod=600 Dec 12 15:32:16 crc kubenswrapper[5099]: I1212 15:32:16.644852 5099 provider.go:93] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Dec 12 15:32:17 crc kubenswrapper[5099]: I1212 15:32:17.643429 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a" exitCode=0 Dec 12 15:32:17 crc kubenswrapper[5099]: I1212 15:32:17.643481 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a"} Dec 12 15:32:17 crc kubenswrapper[5099]: I1212 15:32:17.644188 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b"} Dec 12 15:32:17 crc kubenswrapper[5099]: I1212 15:32:17.644244 5099 scope.go:117] "RemoveContainer" containerID="0fdeb729c4f065f4c9cb140219a15931967f589a6d5c6c791404fed72f77f20b" Dec 12 15:34:16 crc kubenswrapper[5099]: I1212 15:34:16.515998 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:34:16 crc kubenswrapper[5099]: I1212 15:34:16.516626 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.086950 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.213681 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.213830 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.265345 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.265406 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvqk\" (UniqueName: \"kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.265616 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.367443 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.367503 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvqk\" (UniqueName: \"kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.367534 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.368612 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.368702 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content\") pod \"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.389925 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvqk\" (UniqueName: \"kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk\") pod 
\"community-operators-bpqqq\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.536758 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.793860 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:36 crc kubenswrapper[5099]: I1212 15:34:36.991638 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerStarted","Data":"34563cefc89283e799066f5ad10baeec0d2f60aa19ac04ce4ab1a8daaa62f482"} Dec 12 15:34:39 crc kubenswrapper[5099]: I1212 15:34:39.056886 5099 generic.go:358] "Generic (PLEG): container finished" podID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerID="de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0" exitCode=0 Dec 12 15:34:39 crc kubenswrapper[5099]: I1212 15:34:39.057025 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerDied","Data":"de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0"} Dec 12 15:34:41 crc kubenswrapper[5099]: I1212 15:34:41.072526 5099 generic.go:358] "Generic (PLEG): container finished" podID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerID="5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888" exitCode=0 Dec 12 15:34:41 crc kubenswrapper[5099]: I1212 15:34:41.072851 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerDied","Data":"5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888"} Dec 12 15:34:42 crc kubenswrapper[5099]: I1212 15:34:42.082538 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerStarted","Data":"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5"} Dec 12 15:34:42 crc kubenswrapper[5099]: I1212 15:34:42.109388 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpqqq" podStartSLOduration=4.8835667019999995 podStartE2EDuration="6.109352956s" podCreationTimestamp="2025-12-12 15:34:36 +0000 UTC" firstStartedPulling="2025-12-12 15:34:39.058371499 +0000 UTC m=+817.162280180" lastFinishedPulling="2025-12-12 15:34:40.284157793 +0000 UTC m=+818.388066434" observedRunningTime="2025-12-12 15:34:42.103157137 +0000 UTC m=+820.207065788" watchObservedRunningTime="2025-12-12 15:34:42.109352956 +0000 UTC m=+820.213261597" Dec 12 15:34:46 crc kubenswrapper[5099]: I1212 15:34:46.516342 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:34:46 crc kubenswrapper[5099]: I1212 15:34:46.516807 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:34:46 crc kubenswrapper[5099]: I1212 15:34:46.537442 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:46 crc kubenswrapper[5099]: I1212 15:34:46.537532 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:46 crc kubenswrapper[5099]: I1212 15:34:46.577916 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:47 crc kubenswrapper[5099]: I1212 15:34:47.183673 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:47 crc kubenswrapper[5099]: I1212 15:34:47.228229 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:49 crc kubenswrapper[5099]: I1212 15:34:49.128834 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpqqq" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="registry-server" containerID="cri-o://5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5" gracePeriod=2 Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.001162 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.102867 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content\") pod \"33a78127-76eb-4fd4-83e6-a7c4688abdab\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.102911 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities\") pod \"33a78127-76eb-4fd4-83e6-a7c4688abdab\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.102999 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bvqk\" (UniqueName: \"kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk\") pod \"33a78127-76eb-4fd4-83e6-a7c4688abdab\" (UID: \"33a78127-76eb-4fd4-83e6-a7c4688abdab\") " Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.104403 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities" (OuterVolumeSpecName: "utilities") pod "33a78127-76eb-4fd4-83e6-a7c4688abdab" (UID: "33a78127-76eb-4fd4-83e6-a7c4688abdab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.110561 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk" (OuterVolumeSpecName: "kube-api-access-8bvqk") pod "33a78127-76eb-4fd4-83e6-a7c4688abdab" (UID: "33a78127-76eb-4fd4-83e6-a7c4688abdab"). 
InnerVolumeSpecName "kube-api-access-8bvqk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.136211 5099 generic.go:358] "Generic (PLEG): container finished" podID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerID="5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5" exitCode=0 Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.136291 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpqqq" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.136391 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerDied","Data":"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5"} Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.136419 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpqqq" event={"ID":"33a78127-76eb-4fd4-83e6-a7c4688abdab","Type":"ContainerDied","Data":"34563cefc89283e799066f5ad10baeec0d2f60aa19ac04ce4ab1a8daaa62f482"} Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.136447 5099 scope.go:117] "RemoveContainer" containerID="5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.155323 5099 scope.go:117] "RemoveContainer" containerID="5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.158602 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33a78127-76eb-4fd4-83e6-a7c4688abdab" (UID: "33a78127-76eb-4fd4-83e6-a7c4688abdab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.173028 5099 scope.go:117] "RemoveContainer" containerID="de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.194311 5099 scope.go:117] "RemoveContainer" containerID="5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5" Dec 12 15:34:50 crc kubenswrapper[5099]: E1212 15:34:50.195093 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5\": container with ID starting with 5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5 not found: ID does not exist" containerID="5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.195163 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5"} err="failed to get container status \"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5\": rpc error: code = NotFound desc = could not find container \"5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5\": container with ID starting with 5b3ad41a2c2f0a403f7946128b44d492bea9423d8c0c49779b9659856f99d9b5 not found: ID does not exist" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.195185 5099 scope.go:117] "RemoveContainer" containerID="5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888" Dec 12 15:34:50 crc kubenswrapper[5099]: E1212 15:34:50.195450 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888\": container with ID starting with 5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888 not found: ID does not exist" containerID="5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.195477 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888"} err="failed to get container status \"5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888\": rpc error: code = NotFound desc = could not find container \"5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888\": container with ID starting with 5729d287dc5dfe2f90ffbcf438c0a2bc2e352be97f1c24ad70731ebbad372888 not found: ID does not exist" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.195491 5099 scope.go:117] "RemoveContainer" containerID="de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0" Dec 12 15:34:50 crc kubenswrapper[5099]: E1212 15:34:50.195790 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0\": container with ID starting with de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0 not found: ID does not exist" containerID="de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.195814 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0"} err="failed to get container status \"de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0\": rpc error: code = NotFound desc = could not find container \"de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0\": container with ID starting with de564cf356edfc002e4aafa5bef4a6891807efc89887a5d0cee5f7a1d9b7f1a0 not found: ID does not exist" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.204154 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8bvqk\" (UniqueName: \"kubernetes.io/projected/33a78127-76eb-4fd4-83e6-a7c4688abdab-kube-api-access-8bvqk\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.204196 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.204208 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a78127-76eb-4fd4-83e6-a7c4688abdab-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.481706 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:50 crc kubenswrapper[5099]: I1212 15:34:50.484566 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpqqq"] Dec 12 15:34:52 crc kubenswrapper[5099]: I1212 15:34:52.538699 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" path="/var/lib/kubelet/pods/33a78127-76eb-4fd4-83e6-a7c4688abdab/volumes" Dec 12 15:35:16 crc kubenswrapper[5099]: I1212 15:35:16.515399 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:35:16 crc kubenswrapper[5099]: I1212 15:35:16.516813 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:35:16 crc kubenswrapper[5099]: I1212 15:35:16.516887 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:35:16 crc kubenswrapper[5099]: I1212 15:35:16.517533 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:35:16 crc kubenswrapper[5099]: I1212 15:35:16.517620 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" 
containerID="cri-o://fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b" gracePeriod=600 Dec 12 15:35:17 crc kubenswrapper[5099]: I1212 15:35:17.291348 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b" exitCode=0 Dec 12 15:35:17 crc kubenswrapper[5099]: I1212 15:35:17.291448 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b"} Dec 12 15:35:17 crc kubenswrapper[5099]: I1212 15:35:17.292014 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7"} Dec 12 15:35:17 crc kubenswrapper[5099]: I1212 15:35:17.292060 5099 scope.go:117] "RemoveContainer" containerID="a4f55d53d74f97d2f7e29a4b2345a51f4b33eca00c342b37de40618eff52b12a" Dec 12 15:36:02 crc kubenswrapper[5099]: I1212 15:36:02.772623 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:36:02 crc kubenswrapper[5099]: I1212 15:36:02.773273 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:36:02 crc kubenswrapper[5099]: I1212 15:36:02.779993 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:36:02 crc kubenswrapper[5099]: I1212 15:36:02.780139 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.946126 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947448 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="extract-utilities" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947474 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="extract-utilities" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947503 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="registry-server" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947510 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="registry-server" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947528 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="extract-content" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947536 5099 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="extract-content" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.947722 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="33a78127-76eb-4fd4-83e6-a7c4688abdab" containerName="registry-server" Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.971941 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:10 crc kubenswrapper[5099]: I1212 15:37:10.972122 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.090769 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.090857 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4n5\" (UniqueName: \"kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.090891 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.192635 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.192790 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4n5\" (UniqueName: \"kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.192852 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.193913 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.194028 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.230180 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4n5\" (UniqueName: \"kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5\") pod \"certified-operators-vzk99\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.290931 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:11 crc kubenswrapper[5099]: I1212 15:37:11.543180 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:12 crc kubenswrapper[5099]: I1212 15:37:12.521506 5099 generic.go:358] "Generic (PLEG): container finished" podID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerID="3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760" exitCode=0 Dec 12 15:37:12 crc kubenswrapper[5099]: I1212 15:37:12.521701 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerDied","Data":"3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760"} Dec 12 15:37:12 crc kubenswrapper[5099]: I1212 15:37:12.521749 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerStarted","Data":"df9fa4c2278550ba6c1ba1f741b74a6961031fb687e07d21fee7259438169772"} Dec 12 15:37:14 crc kubenswrapper[5099]: I1212 15:37:14.542632 5099 generic.go:358] "Generic (PLEG): container finished" podID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerID="fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49" exitCode=0 Dec 12 15:37:14 crc kubenswrapper[5099]: I1212 15:37:14.542720 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerDied","Data":"fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49"} Dec 12 15:37:15 crc kubenswrapper[5099]: I1212 15:37:15.552843 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerStarted","Data":"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5"} Dec 12 15:37:15 crc kubenswrapper[5099]: I1212 15:37:15.574372 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzk99" podStartSLOduration=4.632901167 podStartE2EDuration="5.574336771s" podCreationTimestamp="2025-12-12 15:37:10 +0000 UTC" firstStartedPulling="2025-12-12 15:37:12.52280853 +0000 UTC m=+970.626717171" lastFinishedPulling="2025-12-12 15:37:13.464244134 +0000 UTC m=+971.568152775" observedRunningTime="2025-12-12 15:37:15.572593036 +0000 UTC m=+973.676501677" watchObservedRunningTime="2025-12-12 15:37:15.574336771 +0000 UTC m=+973.678245412" Dec 12 15:37:16 crc 
kubenswrapper[5099]: I1212 15:37:16.515498 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:37:16 crc kubenswrapper[5099]: I1212 15:37:16.515598 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.256408 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm"] Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.258486 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="kube-rbac-proxy" containerID="cri-o://453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.259051 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="ovnkube-cluster-manager" containerID="cri-o://9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.481819 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5glsp"] Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.482880 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-controller" containerID="cri-o://1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483200 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-node" containerID="cri-o://0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483175 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="northd" containerID="cri-o://9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483399 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-acl-logging" containerID="cri-o://2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483444 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" 
podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="nbdb" containerID="cri-o://57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483404 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.483404 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="sbdb" containerID="cri-o://766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" gracePeriod=30 Dec 12 15:37:19 crc kubenswrapper[5099]: I1212 15:37:19.519594 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovnkube-controller" containerID="cri-o://e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" gracePeriod=30 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.028197 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.055424 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides\") pod \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.055488 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config\") pod \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.055593 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert\") pod \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056228 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "052c66d7-f3c6-4f4b-97e0-70e9e533308c" (UID: "052c66d7-f3c6-4f4b-97e0-70e9e533308c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "052c66d7-f3c6-4f4b-97e0-70e9e533308c" (UID: "052c66d7-f3c6-4f4b-97e0-70e9e533308c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056545 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxcm\" (UniqueName: \"kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm\") pod \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\" (UID: \"052c66d7-f3c6-4f4b-97e0-70e9e533308c\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056795 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056820 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.056872 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057431 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="ovnkube-cluster-manager" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057458 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="ovnkube-cluster-manager" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057478 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="kube-rbac-proxy" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057485 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="kube-rbac-proxy" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057603 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="ovnkube-cluster-manager" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.057615 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerName="kube-rbac-proxy" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.062267 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.062486 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "052c66d7-f3c6-4f4b-97e0-70e9e533308c" (UID: "052c66d7-f3c6-4f4b-97e0-70e9e533308c"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.062609 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm" (OuterVolumeSpecName: "kube-api-access-plxcm") pod "052c66d7-f3c6-4f4b-97e0-70e9e533308c" (UID: "052c66d7-f3c6-4f4b-97e0-70e9e533308c"). InnerVolumeSpecName "kube-api-access-plxcm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157653 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157732 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157828 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjfj6\" (UniqueName: \"kubernetes.io/projected/2611467e-9640-4cb8-9428-eb0557ff9b00-kube-api-access-pjfj6\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157867 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2611467e-9640-4cb8-9428-eb0557ff9b00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157918 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/052c66d7-f3c6-4f4b-97e0-70e9e533308c-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.157934 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plxcm\" (UniqueName: \"kubernetes.io/projected/052c66d7-f3c6-4f4b-97e0-70e9e533308c-kube-api-access-plxcm\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.171308 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5glsp_0fd18053-827f-48f8-b64b-4cc0035ce4ad/ovn-acl-logging/0.log" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.172686 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5glsp_0fd18053-827f-48f8-b64b-4cc0035ce4ad/ovn-controller/0.log" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.173614 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.239768 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pdxfq"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240911 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="northd" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240925 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="northd" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240934 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="nbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240939 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="nbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240952 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-acl-logging" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240961 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-acl-logging" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240975 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kubecfg-setup" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.240980 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kubecfg-setup" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241003 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-node" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241008 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-node" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241017 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241024 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241032 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovnkube-controller" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241037 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovnkube-controller" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241046 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-controller" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241051 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-controller" Dec 12 15:37:20 crc 
kubenswrapper[5099]: I1212 15:37:20.241060 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="sbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241065 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="sbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241235 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="sbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241248 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="northd" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241257 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-node" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241264 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-acl-logging" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241271 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="nbdb" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241278 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241286 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovn-controller" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.241295 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerName="ovnkube-controller" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.246841 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.259186 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.259317 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket" (OuterVolumeSpecName: "log-socket") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260094 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.259986 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260178 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260239 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260306 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260377 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260402 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260430 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.260479 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261422 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261741 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261824 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261860 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261944 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261968 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.261991 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262069 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262141 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units" (OuterVolumeSpecName: 
"systemd-units") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262178 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262180 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262221 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262263 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262302 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262806 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262841 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22clz\" (UniqueName: \"kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz\") pod \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\" (UID: \"0fd18053-827f-48f8-b64b-4cc0035ce4ad\") " Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262982 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-netd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263030 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-bin\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263107 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-systemd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263348 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2611467e-9640-4cb8-9428-eb0557ff9b00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262196 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263375 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262212 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262228 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash" (OuterVolumeSpecName: "host-slash") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263423 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-var-lib-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263476 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262249 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log" (OuterVolumeSpecName: "node-log") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262262 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262343 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262599 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263538 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7367272d-123c-46b8-bb11-7091c0bac810-ovn-node-metrics-cert\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263604 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-kubelet\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.262759 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263687 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-systemd-units\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263748 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263793 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-config\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263815 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgw2\" (UniqueName: \"kubernetes.io/projected/7367272d-123c-46b8-bb11-7091c0bac810-kube-api-access-7jgw2\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263853 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-etc-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263928 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.263968 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264060 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-netns\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264095 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-slash\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264125 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-node-log\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264147 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-script-lib\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264398 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264443 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-ovn\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264477 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-env-overrides\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264531 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-log-socket\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264574 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pjfj6\" (UniqueName: \"kubernetes.io/projected/2611467e-9640-4cb8-9428-eb0557ff9b00-kube-api-access-pjfj6\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264731 5099 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264771 5099 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264793 5099 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264807 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264819 5099 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264830 5099 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264842 5099 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264854 5099 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264865 5099 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264878 5099 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264890 5099 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264902 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264914 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264912 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc 
kubenswrapper[5099]: I1212 15:37:20.264927 5099 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.264986 5099 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.265001 5099 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.265015 5099 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.265635 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2611467e-9640-4cb8-9428-eb0557ff9b00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.267511 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.268734 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2611467e-9640-4cb8-9428-eb0557ff9b00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.271243 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz" (OuterVolumeSpecName: "kube-api-access-22clz") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "kube-api-access-22clz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.279000 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0fd18053-827f-48f8-b64b-4cc0035ce4ad" (UID: "0fd18053-827f-48f8-b64b-4cc0035ce4ad"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.285493 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjfj6\" (UniqueName: \"kubernetes.io/projected/2611467e-9640-4cb8-9428-eb0557ff9b00-kube-api-access-pjfj6\") pod \"ovnkube-control-plane-97c9b6c48-6lz82\" (UID: \"2611467e-9640-4cb8-9428-eb0557ff9b00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366092 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366167 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-var-lib-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366186 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7367272d-123c-46b8-bb11-7091c0bac810-ovn-node-metrics-cert\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366258 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-var-lib-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366298 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366331 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-kubelet\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366385 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-kubelet\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366452 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-systemd-units\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366486 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-systemd-units\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366512 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366585 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366613 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-config\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366635 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgw2\" (UniqueName: \"kubernetes.io/projected/7367272d-123c-46b8-bb11-7091c0bac810-kube-api-access-7jgw2\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366735 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-etc-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.366968 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-etc-openvswitch\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367114 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367155 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdxfq\" (UID: 
\"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367239 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-netns\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367327 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-slash\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367373 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-run-netns\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367406 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-node-log\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367424 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-script-lib\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367429 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-config\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367466 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-slash\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367535 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-node-log\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367579 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-ovn\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367647 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-ovn\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368073 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-ovnkube-script-lib\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368184 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-env-overrides\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.367607 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7367272d-123c-46b8-bb11-7091c0bac810-env-overrides\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368275 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-log-socket\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368371 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-log-socket\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368413 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-netd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368461 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-netd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368470 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-bin\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368496 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-systemd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368541 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-host-cni-bin\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368594 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fd18053-827f-48f8-b64b-4cc0035ce4ad-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368626 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7367272d-123c-46b8-bb11-7091c0bac810-run-systemd\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368646 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22clz\" (UniqueName: \"kubernetes.io/projected/0fd18053-827f-48f8-b64b-4cc0035ce4ad-kube-api-access-22clz\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.368656 5099 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fd18053-827f-48f8-b64b-4cc0035ce4ad-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.370494 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7367272d-123c-46b8-bb11-7091c0bac810-ovn-node-metrics-cert\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.382538 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.386500 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgw2\" (UniqueName: \"kubernetes.io/projected/7367272d-123c-46b8-bb11-7091c0bac810-kube-api-access-7jgw2\") pod \"ovnkube-node-pdxfq\" (UID: \"7367272d-123c-46b8-bb11-7091c0bac810\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.406490 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.568949 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.606724 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5glsp_0fd18053-827f-48f8-b64b-4cc0035ce4ad/ovn-acl-logging/0.log" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.607176 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5glsp_0fd18053-827f-48f8-b64b-4cc0035ce4ad/ovn-controller/0.log" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608012 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608052 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608066 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608076 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608085 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608093 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608101 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" exitCode=143 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608082 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608166 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608179 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608218 5099 scope.go:117] "RemoveContainer" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608198 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608291 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608301 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608311 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608336 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608112 5099 generic.go:358] "Generic (PLEG): container finished" podID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" exitCode=143 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608349 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608461 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608471 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608480 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608486 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608491 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608496 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608502 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608507 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608511 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608517 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608524 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608531 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608538 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608546 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608551 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608555 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608560 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608565 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608570 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608575 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608580 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5glsp" event={"ID":"0fd18053-827f-48f8-b64b-4cc0035ce4ad","Type":"ContainerDied","Data":"9ef85c87cbec67f964cac674fb4487676e0cb4ade90699c1a7c3fc97b0c20bd2"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608595 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608600 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608607 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608612 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608617 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608622 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608627 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608632 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.608636 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.612058 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" event={"ID":"2611467e-9640-4cb8-9428-eb0557ff9b00","Type":"ContainerStarted","Data":"6dcf72c1b80a344cc0784a5cfce3c0781df20f5e9c0c72e2073a1d9da48bae44"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 
15:37:20.613469 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"37ab666852ae10a519b0496c4b4fffc6885e89ee8557c92d7b3f132050ec66d7"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.615483 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.615548 5099 generic.go:358] "Generic (PLEG): container finished" podID="76a2810e-710e-4f57-90b7-23d7bdfea6d8" containerID="249a7eb4b7d07ea613a98c70245960a8bfaf3ad27af9656d70abc8520710242a" exitCode=2 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.615637 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2sj6" event={"ID":"76a2810e-710e-4f57-90b7-23d7bdfea6d8","Type":"ContainerDied","Data":"249a7eb4b7d07ea613a98c70245960a8bfaf3ad27af9656d70abc8520710242a"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.616385 5099 scope.go:117] "RemoveContainer" containerID="249a7eb4b7d07ea613a98c70245960a8bfaf3ad27af9656d70abc8520710242a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618604 5099 generic.go:358] "Generic (PLEG): container finished" podID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerID="9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618637 5099 generic.go:358] "Generic (PLEG): container finished" podID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" containerID="453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f" exitCode=0 Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618736 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerDied","Data":"9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618763 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618775 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618788 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerDied","Data":"453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618800 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618887 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618905 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" event={"ID":"052c66d7-f3c6-4f4b-97e0-70e9e533308c","Type":"ContainerDied","Data":"0aa84869e63a7b068c8633ff0cfaf9a6cba717286d178777651d1ff358b55dac"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618917 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.618929 5099 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f"} Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.619145 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.648002 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.669977 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5glsp"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.670033 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5glsp"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.677020 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.684090 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.689008 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-lswnm"] Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.705045 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.727908 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.754571 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.793111 5099 scope.go:117] "RemoveContainer" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.806141 5099 scope.go:117] "RemoveContainer" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.852710 5099 scope.go:117] "RemoveContainer" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.867446 5099 scope.go:117] "RemoveContainer" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.868465 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with 
e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.868538 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} err="failed to get container status \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.868567 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.868982 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869014 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} err="failed to get container status \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869033 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.869284 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869337 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} err="failed to get container status \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869354 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.869582 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869617 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} err="failed to get container status \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": rpc error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.869656 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.870015 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870070 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} err="failed to get container status \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": rpc error: code = NotFound desc = could not find container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870089 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.870348 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870410 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} err="failed to get container status \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870428 5099 scope.go:117] "RemoveContainer" 
containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.870717 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": container with ID starting with 2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b not found: ID does not exist" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870836 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} err="failed to get container status \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": rpc error: code = NotFound desc = could not find container \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": container with ID starting with 2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.870937 5099 scope.go:117] "RemoveContainer" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.871347 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": container with ID starting with 1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8 not found: ID does not exist" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.871439 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} err="failed to get container status \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": rpc error: code = NotFound desc = could not find container \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": container with ID starting with 1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.871526 5099 scope.go:117] "RemoveContainer" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: E1212 15:37:20.872209 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": container with ID starting with 7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021 not found: ID does not exist" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.872267 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} err="failed to get container status \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": rpc error: code = NotFound desc = could not find container \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": container with ID starting with 
7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.872285 5099 scope.go:117] "RemoveContainer" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.872854 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} err="failed to get container status \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.872880 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873103 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} err="failed to get container status \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873124 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873415 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} err="failed to get container status \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873436 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873711 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} err="failed to get container status \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": rpc error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873736 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873972 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} err="failed to get container status \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": rpc error: code = NotFound desc = could not find container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.873995 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874207 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} err="failed to get container status \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874228 5099 scope.go:117] "RemoveContainer" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874382 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} err="failed to get container status \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": rpc error: code = NotFound desc = could not find container \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": container with ID starting with 2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874403 5099 scope.go:117] "RemoveContainer" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874614 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} err="failed to get container status \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": rpc error: code = NotFound desc = could not find container \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": container with ID starting with 1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874636 5099 scope.go:117] "RemoveContainer" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874883 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} err="failed to get container status \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": rpc error: code = NotFound desc = could not find container \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": container with ID starting with 7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021 not found: ID does not exist" Dec 
12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.874907 5099 scope.go:117] "RemoveContainer" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.875099 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} err="failed to get container status \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.875120 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.875287 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} err="failed to get container status \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.875312 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876129 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} err="failed to get container status \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876152 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876341 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} err="failed to get container status \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": rpc error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876363 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876532 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} err="failed to get container status 
\"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": rpc error: code = NotFound desc = could not find container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876553 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876780 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} err="failed to get container status \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876804 5099 scope.go:117] "RemoveContainer" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876978 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} err="failed to get container status \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": rpc error: code = NotFound desc = could not find container \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": container with ID starting with 2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.876999 5099 scope.go:117] "RemoveContainer" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877188 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} err="failed to get container status \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": rpc error: code = NotFound desc = could not find container \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": container with ID starting with 1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877239 5099 scope.go:117] "RemoveContainer" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877414 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} err="failed to get container status \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": rpc error: code = NotFound desc = could not find container \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": container with ID starting with 7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877438 5099 scope.go:117] "RemoveContainer" 
containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877721 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} err="failed to get container status \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877745 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877970 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} err="failed to get container status \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.877992 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878144 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} err="failed to get container status \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878166 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878309 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} err="failed to get container status \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": rpc error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878329 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878571 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} err="failed to get container status \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": rpc error: code = NotFound desc = could not find 
container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878594 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878852 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} err="failed to get container status \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.878876 5099 scope.go:117] "RemoveContainer" containerID="2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879097 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b"} err="failed to get container status \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": rpc error: code = NotFound desc = could not find container \"2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b\": container with ID starting with 2e17fad196b6528615799d6894c62ddef397304bbfe1837802ccc19ca074452b not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879118 5099 scope.go:117] "RemoveContainer" containerID="1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879279 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8"} err="failed to get container status \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": rpc error: code = NotFound desc = could not find container \"1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8\": container with ID starting with 1ea5e55167289c56d97fc2e6243b12f0a126a7a53302f0476ec65058e9e159d8 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879302 5099 scope.go:117] "RemoveContainer" containerID="7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879451 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021"} err="failed to get container status \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": rpc error: code = NotFound desc = could not find container \"7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021\": container with ID starting with 7836a149a06499ec143ab635f652550d64664681efaaecff517f1ba103761021 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879472 5099 scope.go:117] "RemoveContainer" containerID="e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879611 5099 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6"} err="failed to get container status \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": rpc error: code = NotFound desc = could not find container \"e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6\": container with ID starting with e2159160bd47697440d80754a36b90f4ed6f4c93d7660daabc9873bc8f6598f6 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879632 5099 scope.go:117] "RemoveContainer" containerID="766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.879977 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884"} err="failed to get container status \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": rpc error: code = NotFound desc = could not find container \"766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884\": container with ID starting with 766efa5e6c86827f4f2a7b610fb6febdbe60ca56a97cfa53c35ae31f74409884 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880000 5099 scope.go:117] "RemoveContainer" containerID="57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880172 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a"} err="failed to get container status \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": rpc error: code = NotFound desc = could not find container \"57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a\": container with ID starting with 57b0db5eb1729505ebb7a57e96345e27f9946197b6e9caddd1c96bb82e809c0a not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880192 5099 scope.go:117] "RemoveContainer" containerID="9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880334 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630"} err="failed to get container status \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": rpc error: code = NotFound desc = could not find container \"9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630\": container with ID starting with 9895742dc20759e17987c433758788e43fad28cb0226309829d85f8c9bdf5630 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880356 5099 scope.go:117] "RemoveContainer" containerID="03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880503 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46"} err="failed to get container status \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": rpc error: code = NotFound desc = could not find container \"03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46\": container with ID starting with 
03ecb020782c84b1c0a8f44c4da5d16ab3cd6c226c74f4070ea2d0c1d1752c46 not found: ID does not exist" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880525 5099 scope.go:117] "RemoveContainer" containerID="0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e" Dec 12 15:37:20 crc kubenswrapper[5099]: I1212 15:37:20.880762 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e"} err="failed to get container status \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": rpc error: code = NotFound desc = could not find container \"0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e\": container with ID starting with 0a6b8deb028c486841d4ca8e51411a6d0cb294cc92d2c77c8738842911c0021e not found: ID does not exist" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.291474 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.292514 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.341887 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.628562 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" event={"ID":"2611467e-9640-4cb8-9428-eb0557ff9b00","Type":"ContainerStarted","Data":"bd11a7350e30d6af0e42050dc8a20f0e31a4d8972fe46decbb489c2374bd12a3"} Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.630480 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" event={"ID":"2611467e-9640-4cb8-9428-eb0557ff9b00","Type":"ContainerStarted","Data":"0dcb3262ca560912efcbccb8014326db76aaca92e1ced3bac0122d2952a5c9b2"} Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.631343 5099 generic.go:358] "Generic (PLEG): container finished" podID="7367272d-123c-46b8-bb11-7091c0bac810" containerID="08bec1df63e60ea260e7878cbab4228045099613e7c56aec2fadb13eca3c82bf" exitCode=0 Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.631509 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerDied","Data":"08bec1df63e60ea260e7878cbab4228045099613e7c56aec2fadb13eca3c82bf"} Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.634916 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.636144 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2sj6" event={"ID":"76a2810e-710e-4f57-90b7-23d7bdfea6d8","Type":"ContainerStarted","Data":"03020eac895aad370aef44d5557f1d11de3c1260c45003f121a81ec0f087489d"} Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.687732 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-6lz82" podStartSLOduration=2.6866106800000003 podStartE2EDuration="2.68661068s" podCreationTimestamp="2025-12-12 15:37:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:37:21.655884933 +0000 UTC m=+979.759793574" watchObservedRunningTime="2025-12-12 15:37:21.68661068 +0000 UTC m=+979.790519321" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.711305 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:21 crc kubenswrapper[5099]: I1212 15:37:21.784679 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.478238 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052c66d7-f3c6-4f4b-97e0-70e9e533308c" path="/var/lib/kubelet/pods/052c66d7-f3c6-4f4b-97e0-70e9e533308c/volumes" Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.480093 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd18053-827f-48f8-b64b-4cc0035ce4ad" path="/var/lib/kubelet/pods/0fd18053-827f-48f8-b64b-4cc0035ce4ad/volumes" Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646324 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"4b440ec07a9a055ade707542a72f15815265c2b3a5f4f62bbbad10561161944c"} Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646371 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"264d5b13e6aea8d3f2b7279c3b11d49e7421aedc6dd8c2fd2ed23af3060c2645"} Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646383 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"5aa03186be7d24772ee43aba099f48c49655f85777ec4d1002606082338d00c1"} Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646394 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"c2d5fe6ea21c79ef1907f301231e621a29ead179056dc5e71bfed329ee0c7a8c"} Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646403 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"2d91cecc9c01647cde939241fa5f91671811b6b43340c5e28c8cb26c1e05e860"} Dec 12 15:37:22 crc kubenswrapper[5099]: I1212 15:37:22.646416 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"64fc66e5806ec3fb3451139f4cfead86e15feaedce61879ada6cc72fdbb05939"} Dec 12 15:37:23 crc kubenswrapper[5099]: I1212 15:37:23.652881 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzk99" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="registry-server" containerID="cri-o://c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5" gracePeriod=2 Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.528049 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.531455 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content\") pod \"d78a5551-18ac-4da9-bfca-841e1b08af84\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.531533 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4n5\" (UniqueName: \"kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5\") pod \"d78a5551-18ac-4da9-bfca-841e1b08af84\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.531566 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities\") pod \"d78a5551-18ac-4da9-bfca-841e1b08af84\" (UID: \"d78a5551-18ac-4da9-bfca-841e1b08af84\") " Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.534907 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities" (OuterVolumeSpecName: "utilities") pod "d78a5551-18ac-4da9-bfca-841e1b08af84" (UID: "d78a5551-18ac-4da9-bfca-841e1b08af84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.539373 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5" (OuterVolumeSpecName: "kube-api-access-hv4n5") pod "d78a5551-18ac-4da9-bfca-841e1b08af84" (UID: "d78a5551-18ac-4da9-bfca-841e1b08af84"). InnerVolumeSpecName "kube-api-access-hv4n5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.584390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d78a5551-18ac-4da9-bfca-841e1b08af84" (UID: "d78a5551-18ac-4da9-bfca-841e1b08af84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.639383 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv4n5\" (UniqueName: \"kubernetes.io/projected/d78a5551-18ac-4da9-bfca-841e1b08af84-kube-api-access-hv4n5\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.639787 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.639802 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a5551-18ac-4da9-bfca-841e1b08af84-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.661503 5099 generic.go:358] "Generic (PLEG): container finished" podID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerID="c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5" exitCode=0 Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.661545 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerDied","Data":"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5"} Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.661587 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzk99" event={"ID":"d78a5551-18ac-4da9-bfca-841e1b08af84","Type":"ContainerDied","Data":"df9fa4c2278550ba6c1ba1f741b74a6961031fb687e07d21fee7259438169772"} Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.661611 5099 scope.go:117] "RemoveContainer" containerID="c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.661642 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzk99" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.678878 5099 scope.go:117] "RemoveContainer" containerID="fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.703734 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.710544 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzk99"] Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.713136 5099 scope.go:117] "RemoveContainer" containerID="3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.731798 5099 scope.go:117] "RemoveContainer" containerID="c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5" Dec 12 15:37:24 crc kubenswrapper[5099]: E1212 15:37:24.732396 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5\": container with ID starting with c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5 not found: ID does not exist" containerID="c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.732448 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5"} err="failed to get container status \"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5\": rpc error: code = NotFound desc = could not find container \"c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5\": container with ID starting with c293c5faf894adb7ee9fb425eada6ad89c9c20b08bf5a85bf16860a1ccf8aca5 not found: ID does not exist" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.732504 5099 scope.go:117] "RemoveContainer" containerID="fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49" Dec 12 15:37:24 crc kubenswrapper[5099]: E1212 15:37:24.733035 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49\": container with ID starting with fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49 not found: ID does not exist" containerID="fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.733101 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49"} err="failed to get container status \"fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49\": rpc error: code = NotFound desc = could not find container \"fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49\": container with ID starting with fc8d585c942850ec8a82941c0b15072315ebac5692b11d5703ff3a341b7e0c49 not found: ID does not exist" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.733134 5099 scope.go:117] "RemoveContainer" containerID="3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760" Dec 12 15:37:24 crc kubenswrapper[5099]: E1212 15:37:24.733779 5099 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760\": container with ID starting with 3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760 not found: ID does not exist" containerID="3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760" Dec 12 15:37:24 crc kubenswrapper[5099]: I1212 15:37:24.733811 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760"} err="failed to get container status \"3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760\": rpc error: code = NotFound desc = could not find container \"3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760\": container with ID starting with 3432a2206addf26da6f6574375a4a8f0565ccfddb1d418d0ea89eec9e33a1760 not found: ID does not exist" Dec 12 15:37:25 crc kubenswrapper[5099]: I1212 15:37:25.796022 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"1c7570056e5f945d0e5ac9ff672af63b6fbbca8d64a68d313e87761a6c2c08ca"} Dec 12 15:37:26 crc kubenswrapper[5099]: I1212 15:37:26.476078 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" path="/var/lib/kubelet/pods/d78a5551-18ac-4da9-bfca-841e1b08af84/volumes" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.827503 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" event={"ID":"7367272d-123c-46b8-bb11-7091c0bac810","Type":"ContainerStarted","Data":"0dbc45fef48c43dc32c7ee9ff021970f95aa72f8b4aaeda58e9097e7d0ecb5e6"} Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.829626 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.829653 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.829718 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.861623 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.870273 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" podStartSLOduration=8.87025437 podStartE2EDuration="8.87025437s" podCreationTimestamp="2025-12-12 15:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:37:28.862385968 +0000 UTC m=+986.966294629" watchObservedRunningTime="2025-12-12 15:37:28.87025437 +0000 UTC m=+986.974163021" Dec 12 15:37:28 crc kubenswrapper[5099]: I1212 15:37:28.870863 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:37:46 crc kubenswrapper[5099]: I1212 15:37:46.515209 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:37:46 crc kubenswrapper[5099]: I1212 15:37:46.515821 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:37:57 crc kubenswrapper[5099]: E1212 15:37:57.137411 5099 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.169383 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.181475 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.197314 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52828: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.225403 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52834: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.258069 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52846: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.306043 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52848: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.367980 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52856: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.478454 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52870: no serving certificate available for the kubelet" Dec 12 15:37:59 crc kubenswrapper[5099]: I1212 15:37:59.665459 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52878: no serving certificate available for the kubelet" Dec 12 15:38:00 crc kubenswrapper[5099]: I1212 15:38:00.008290 5099 ???:1] "http: TLS handshake error from 192.168.126.11:52890: no serving certificate available for the kubelet" Dec 12 15:38:00 crc kubenswrapper[5099]: I1212 15:38:00.673531 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57030: no serving certificate available for the kubelet" Dec 12 15:38:01 crc kubenswrapper[5099]: I1212 15:38:01.089562 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdxfq" Dec 12 15:38:01 crc kubenswrapper[5099]: I1212 15:38:01.972765 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57038: no serving certificate available for the kubelet" Dec 12 15:38:03 crc kubenswrapper[5099]: I1212 15:38:03.661893 5099 scope.go:117] "RemoveContainer" containerID="453368b5a00381fd4926645b701f4fd9842e83e204725f69ad50faa4122d2e6f" Dec 12 15:38:03 crc kubenswrapper[5099]: I1212 15:38:03.690978 5099 scope.go:117] "RemoveContainer" containerID="9030538b595a5e84a0ed1bdc56f6c85298752b7926790fcd28916206a603082f" Dec 
12 15:38:04 crc kubenswrapper[5099]: I1212 15:38:04.558445 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57040: no serving certificate available for the kubelet" Dec 12 15:38:09 crc kubenswrapper[5099]: I1212 15:38:09.703164 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57044: no serving certificate available for the kubelet" Dec 12 15:38:16 crc kubenswrapper[5099]: I1212 15:38:16.515797 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:38:16 crc kubenswrapper[5099]: I1212 15:38:16.516555 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:38:16 crc kubenswrapper[5099]: I1212 15:38:16.516679 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:38:16 crc kubenswrapper[5099]: I1212 15:38:16.517365 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:38:16 crc kubenswrapper[5099]: I1212 15:38:16.517449 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7" gracePeriod=600 Dec 12 15:38:17 crc kubenswrapper[5099]: I1212 15:38:17.426442 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7" exitCode=0 Dec 12 15:38:17 crc kubenswrapper[5099]: I1212 15:38:17.426530 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7"} Dec 12 15:38:17 crc kubenswrapper[5099]: I1212 15:38:17.427467 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0"} Dec 12 15:38:17 crc kubenswrapper[5099]: I1212 15:38:17.427509 5099 scope.go:117] "RemoveContainer" containerID="fca3695db7f4ff2571fcdc7f89f68e183f2fb74b803aed241480609969af109b" Dec 12 15:38:19 crc kubenswrapper[5099]: I1212 15:38:19.971013 5099 ???:1] "http: TLS handshake error from 192.168.126.11:48436: no serving certificate available for the kubelet" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.641911 5099 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643161 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="extract-content" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643180 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="extract-content" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643206 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="extract-utilities" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643212 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="extract-utilities" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643228 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="registry-server" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643233 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="registry-server" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.643322 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="d78a5551-18ac-4da9-bfca-841e1b08af84" containerName="registry-server" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.768092 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.768235 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.852250 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.853039 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz44n\" (UniqueName: \"kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.853166 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.954235 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.954313 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vz44n\" (UniqueName: \"kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.954338 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.954809 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.955327 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities\") pod \"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:33 crc kubenswrapper[5099]: I1212 15:38:33.979113 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz44n\" (UniqueName: \"kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n\") pod 
\"redhat-marketplace-sbnzv\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:34 crc kubenswrapper[5099]: I1212 15:38:34.096744 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:34 crc kubenswrapper[5099]: I1212 15:38:34.590819 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:35 crc kubenswrapper[5099]: I1212 15:38:35.574121 5099 generic.go:358] "Generic (PLEG): container finished" podID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerID="4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35" exitCode=0 Dec 12 15:38:35 crc kubenswrapper[5099]: I1212 15:38:35.574190 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerDied","Data":"4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35"} Dec 12 15:38:35 crc kubenswrapper[5099]: I1212 15:38:35.574628 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerStarted","Data":"bd76049fb05c6d6b38cc5a209038b537a7b5010c8254e198e749007a84143f45"} Dec 12 15:38:36 crc kubenswrapper[5099]: I1212 15:38:36.582146 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerStarted","Data":"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae"} Dec 12 15:38:37 crc kubenswrapper[5099]: I1212 15:38:37.590239 5099 generic.go:358] "Generic (PLEG): container finished" podID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerID="4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae" exitCode=0 Dec 12 15:38:37 crc kubenswrapper[5099]: I1212 15:38:37.590464 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerDied","Data":"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae"} Dec 12 15:38:38 crc kubenswrapper[5099]: I1212 15:38:38.600062 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerStarted","Data":"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370"} Dec 12 15:38:38 crc kubenswrapper[5099]: I1212 15:38:38.620164 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sbnzv" podStartSLOduration=4.799337184 podStartE2EDuration="5.62014333s" podCreationTimestamp="2025-12-12 15:38:33 +0000 UTC" firstStartedPulling="2025-12-12 15:38:35.576094107 +0000 UTC m=+1053.680002748" lastFinishedPulling="2025-12-12 15:38:36.396900253 +0000 UTC m=+1054.500808894" observedRunningTime="2025-12-12 15:38:38.617048901 +0000 UTC m=+1056.720957572" watchObservedRunningTime="2025-12-12 15:38:38.62014333 +0000 UTC m=+1056.724051971" Dec 12 15:38:40 crc kubenswrapper[5099]: I1212 15:38:40.479170 5099 ???:1] "http: TLS handshake error from 192.168.126.11:38348: no serving certificate available for the kubelet" Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.096986 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.097130 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.187298 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.673945 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.848572 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.876602 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:38:44 crc kubenswrapper[5099]: I1212 15:38:44.876994 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bt9c8" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="registry-server" containerID="cri-o://28b211f9836bf827c42dcd09babfaf82432a705ed4f2ebd0bcb014198d559459" gracePeriod=30 Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.648212 5099 generic.go:358] "Generic (PLEG): container finished" podID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerID="28b211f9836bf827c42dcd09babfaf82432a705ed4f2ebd0bcb014198d559459" exitCode=0 Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.649160 5099 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-marketplace-sbnzv" secret="" err="secret \"redhat-marketplace-dockercfg-gg4w7\" not found" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.648426 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerDied","Data":"28b211f9836bf827c42dcd09babfaf82432a705ed4f2ebd0bcb014198d559459"} Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.766089 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.843185 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content\") pod \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.843278 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities\") pod \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.843304 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htfs\" (UniqueName: \"kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs\") pod \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\" (UID: \"3abb2de8-01ab-4808-86ec-35fba75a4cfc\") " Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.845287 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities" (OuterVolumeSpecName: "utilities") pod "3abb2de8-01ab-4808-86ec-35fba75a4cfc" (UID: "3abb2de8-01ab-4808-86ec-35fba75a4cfc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.852337 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs" (OuterVolumeSpecName: "kube-api-access-6htfs") pod "3abb2de8-01ab-4808-86ec-35fba75a4cfc" (UID: "3abb2de8-01ab-4808-86ec-35fba75a4cfc"). InnerVolumeSpecName "kube-api-access-6htfs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.860200 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3abb2de8-01ab-4808-86ec-35fba75a4cfc" (UID: "3abb2de8-01ab-4808-86ec-35fba75a4cfc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.944962 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.945014 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6htfs\" (UniqueName: \"kubernetes.io/projected/3abb2de8-01ab-4808-86ec-35fba75a4cfc-kube-api-access-6htfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:45 crc kubenswrapper[5099]: I1212 15:38:45.945033 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abb2de8-01ab-4808-86ec-35fba75a4cfc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.063098 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w47q"] Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.063962 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="extract-utilities" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064003 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="extract-utilities" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064028 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="extract-content" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064038 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="extract-content" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064065 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="registry-server" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064074 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="registry-server" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.064230 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" containerName="registry-server" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.072085 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.080081 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w47q"] Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275255 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/448d4822-a39f-4a33-a521-74a100040c4b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275314 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvb9\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-kube-api-access-2rvb9\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275464 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/448d4822-a39f-4a33-a521-74a100040c4b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275554 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275640 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275725 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-registry-tls\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275786 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.275834 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.300752 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377491 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/448d4822-a39f-4a33-a521-74a100040c4b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377607 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377693 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-registry-tls\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377727 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377794 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/448d4822-a39f-4a33-a521-74a100040c4b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.377825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2rvb9\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-kube-api-access-2rvb9\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.378455 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/448d4822-a39f-4a33-a521-74a100040c4b-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.379192 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.379438 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/448d4822-a39f-4a33-a521-74a100040c4b-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.388778 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-registry-tls\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.389487 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/448d4822-a39f-4a33-a521-74a100040c4b-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.396466 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.396828 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rvb9\" (UniqueName: \"kubernetes.io/projected/448d4822-a39f-4a33-a521-74a100040c4b-kube-api-access-2rvb9\") pod \"image-registry-5d9d95bf5b-5w47q\" (UID: \"448d4822-a39f-4a33-a521-74a100040c4b\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.474052 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.658611 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sbnzv" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="registry-server" containerID="cri-o://8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370" gracePeriod=2 Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.658894 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt9c8" event={"ID":"3abb2de8-01ab-4808-86ec-35fba75a4cfc","Type":"ContainerDied","Data":"faf54b301098c3dd73795588583561ba18124f451181b8fa06b05b108940a75e"} Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.658940 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt9c8" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.659317 5099 scope.go:117] "RemoveContainer" containerID="28b211f9836bf827c42dcd09babfaf82432a705ed4f2ebd0bcb014198d559459" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.764237 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.773424 5099 scope.go:117] "RemoveContainer" containerID="8abb74e1c6a7277ffc6612793490538bc2470c53d9cf1cf0357d84a34abaf76f" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.777056 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt9c8"] Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.799881 5099 scope.go:117] "RemoveContainer" containerID="10eff7e520e0b1e16af33e0113be6ba800c1a5176c94551664e760504d333998" Dec 12 15:38:46 crc kubenswrapper[5099]: I1212 15:38:46.802549 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w47q"] Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.497685 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.573609 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content\") pod \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.573776 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities\") pod \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.573803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz44n\" (UniqueName: \"kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n\") pod \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\" (UID: \"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe\") " Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.575781 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities" (OuterVolumeSpecName: "utilities") pod "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" (UID: "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.581751 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n" (OuterVolumeSpecName: "kube-api-access-vz44n") pod "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" (UID: "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe"). InnerVolumeSpecName "kube-api-access-vz44n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.585401 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" (UID: "7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.669785 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" event={"ID":"448d4822-a39f-4a33-a521-74a100040c4b","Type":"ContainerStarted","Data":"4fd35d6f7779104a085a69aeae7926a99488a783300c3a2cb485603e186e6199"} Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.669921 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" event={"ID":"448d4822-a39f-4a33-a521-74a100040c4b","Type":"ContainerStarted","Data":"0d2b9feef88f6bb790a93c0a594bc57180c469889c8a6bd96c00398d013099e2"} Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.671329 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.676706 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.676750 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.676764 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vz44n\" (UniqueName: \"kubernetes.io/projected/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe-kube-api-access-vz44n\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.678531 5099 generic.go:358] "Generic (PLEG): container finished" podID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerID="8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370" exitCode=0 Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.678612 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbnzv" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.678563 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerDied","Data":"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370"} Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.678694 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbnzv" event={"ID":"7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe","Type":"ContainerDied","Data":"bd76049fb05c6d6b38cc5a209038b537a7b5010c8254e198e749007a84143f45"} Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.678716 5099 scope.go:117] "RemoveContainer" containerID="8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.695982 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q" podStartSLOduration=1.6959576840000001 podStartE2EDuration="1.695957684s" podCreationTimestamp="2025-12-12 15:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:38:47.693419859 +0000 UTC m=+1065.797328510" watchObservedRunningTime="2025-12-12 15:38:47.695957684 +0000 UTC m=+1065.799866325" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.703146 5099 scope.go:117] "RemoveContainer" containerID="4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.716030 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.718366 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbnzv"] Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.745624 5099 scope.go:117] "RemoveContainer" containerID="4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.764623 5099 scope.go:117] "RemoveContainer" containerID="8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370" Dec 12 15:38:47 crc kubenswrapper[5099]: E1212 15:38:47.765238 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370\": container with ID starting with 8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370 not found: ID does not exist" containerID="8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.765275 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370"} err="failed to get container status \"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370\": rpc error: code = NotFound desc = could not find container \"8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370\": container with ID starting with 8474c36eacb6c69a6fd63e948dfbd64dacd9139f46f273182d2b765772571370 not found: ID does not exist" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.765319 5099 scope.go:117] "RemoveContainer" 
containerID="4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae" Dec 12 15:38:47 crc kubenswrapper[5099]: E1212 15:38:47.765500 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae\": container with ID starting with 4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae not found: ID does not exist" containerID="4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.765535 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae"} err="failed to get container status \"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae\": rpc error: code = NotFound desc = could not find container \"4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae\": container with ID starting with 4848896bb02403b212c8c79aa02a75f22fdc62c984fc40eabde88e453474cfae not found: ID does not exist" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.765552 5099 scope.go:117] "RemoveContainer" containerID="4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35" Dec 12 15:38:47 crc kubenswrapper[5099]: E1212 15:38:47.766272 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35\": container with ID starting with 4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35 not found: ID does not exist" containerID="4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35" Dec 12 15:38:47 crc kubenswrapper[5099]: I1212 15:38:47.766302 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35"} err="failed to get container status \"4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35\": rpc error: code = NotFound desc = could not find container \"4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35\": container with ID starting with 4013c641f5179b7d67ccd272af9b6ce6350c08deac8d5495cca514f3a858fb35 not found: ID does not exist" Dec 12 15:38:48 crc kubenswrapper[5099]: I1212 15:38:48.477768 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3abb2de8-01ab-4808-86ec-35fba75a4cfc" path="/var/lib/kubelet/pods/3abb2de8-01ab-4808-86ec-35fba75a4cfc/volumes" Dec 12 15:38:48 crc kubenswrapper[5099]: I1212 15:38:48.478896 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" path="/var/lib/kubelet/pods/7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe/volumes" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.063894 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt"] Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064467 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="extract-content" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064481 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="extract-content" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 
15:38:49.064503 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="extract-utilities" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064511 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="extract-utilities" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064518 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="registry-server" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064524 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="registry-server" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.064610 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f8b49cd-5457-47f2-8fc5-f830c3f9c7fe" containerName="registry-server" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.068433 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.071139 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.080571 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt"] Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.270493 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.270550 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.270573 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwxq\" (UniqueName: \"kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.372447 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc 
kubenswrapper[5099]: I1212 15:38:49.372990 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.373016 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwxq\" (UniqueName: \"kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.373332 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.373332 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.402274 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwxq\" (UniqueName: \"kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.686848 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:49 crc kubenswrapper[5099]: I1212 15:38:49.906868 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt"] Dec 12 15:38:50 crc kubenswrapper[5099]: I1212 15:38:50.701771 5099 generic.go:358] "Generic (PLEG): container finished" podID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerID="5ca83a4819d17a0817c7681a2e8d9511ecf356cc9ebd1fd2831f923e6af64e05" exitCode=0 Dec 12 15:38:50 crc kubenswrapper[5099]: I1212 15:38:50.702022 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" event={"ID":"cc57602e-f3ee-4f43-a4da-a31edfbc562b","Type":"ContainerDied","Data":"5ca83a4819d17a0817c7681a2e8d9511ecf356cc9ebd1fd2831f923e6af64e05"} Dec 12 15:38:50 crc kubenswrapper[5099]: I1212 15:38:50.702458 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" event={"ID":"cc57602e-f3ee-4f43-a4da-a31edfbc562b","Type":"ContainerStarted","Data":"e36e8f91e651ea8cec13610d72be6de091fd44cbb926b3b4c5faadac6c0c5eed"} Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.831076 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"] Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.840550 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.847776 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"] Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.907189 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.907287 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5q9\" (UniqueName: \"kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:51 crc kubenswrapper[5099]: I1212 15:38:51.907616 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.039739 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.039793 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.039817 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5q9\" (UniqueName: \"kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.040596 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.041098 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.102226 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg5q9\" (UniqueName: \"kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9\") pod \"redhat-operators-2pp8p\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") " pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.172003 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.713372 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"] Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.724594 5099 generic.go:358] "Generic (PLEG): container finished" podID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerID="3428aa1bb9f68a074c649f19d3b5c00b18199b6583326be8bcf93a7bdd8a9ca7" exitCode=0 Dec 12 15:38:52 crc kubenswrapper[5099]: I1212 15:38:52.724673 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" event={"ID":"cc57602e-f3ee-4f43-a4da-a31edfbc562b","Type":"ContainerDied","Data":"3428aa1bb9f68a074c649f19d3b5c00b18199b6583326be8bcf93a7bdd8a9ca7"} Dec 12 15:38:52 crc kubenswrapper[5099]: W1212 15:38:52.726497 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eaf9ae5_19b5_4b4c_871f_f143fae31b09.slice/crio-c60be26cfea21da3c541838b6877bcbbde0ab8eb2310f65c629290351ea4fec3 WatchSource:0}: Error finding container c60be26cfea21da3c541838b6877bcbbde0ab8eb2310f65c629290351ea4fec3: Status 404 returned error can't find the container with id c60be26cfea21da3c541838b6877bcbbde0ab8eb2310f65c629290351ea4fec3 Dec 12 15:38:53 crc kubenswrapper[5099]: I1212 15:38:53.735043 5099 generic.go:358] "Generic (PLEG): container finished" podID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerID="da2884c793b4dac430904ac2fe2c4f500b6e26a253482ad9edd0531e295cb571" exitCode=0 Dec 12 15:38:53 crc kubenswrapper[5099]: I1212 15:38:53.735269 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" event={"ID":"cc57602e-f3ee-4f43-a4da-a31edfbc562b","Type":"ContainerDied","Data":"da2884c793b4dac430904ac2fe2c4f500b6e26a253482ad9edd0531e295cb571"} Dec 12 15:38:53 crc kubenswrapper[5099]: I1212 15:38:53.736906 5099 generic.go:358] "Generic (PLEG): container finished" podID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerID="077375cf66bce3c74812a090cd5ab1d1c451c8c0307b30cfde018561d31f3a50" exitCode=0 Dec 12 15:38:53 crc kubenswrapper[5099]: I1212 15:38:53.736981 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerDied","Data":"077375cf66bce3c74812a090cd5ab1d1c451c8c0307b30cfde018561d31f3a50"} Dec 12 15:38:53 crc kubenswrapper[5099]: I1212 15:38:53.737021 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerStarted","Data":"c60be26cfea21da3c541838b6877bcbbde0ab8eb2310f65c629290351ea4fec3"} Dec 12 15:38:54 crc kubenswrapper[5099]: I1212 15:38:54.754348 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerStarted","Data":"9935597817d9d07767721103a2cc57c0a26d369afbbe1f8f5973336b0657f3bd"} Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.064499 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.142341 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util\") pod \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.142424 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zwxq\" (UniqueName: \"kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq\") pod \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.142557 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle\") pod \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\" (UID: \"cc57602e-f3ee-4f43-a4da-a31edfbc562b\") " Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.144677 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle" (OuterVolumeSpecName: "bundle") pod "cc57602e-f3ee-4f43-a4da-a31edfbc562b" (UID: "cc57602e-f3ee-4f43-a4da-a31edfbc562b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.148109 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq" (OuterVolumeSpecName: "kube-api-access-8zwxq") pod "cc57602e-f3ee-4f43-a4da-a31edfbc562b" (UID: "cc57602e-f3ee-4f43-a4da-a31edfbc562b"). InnerVolumeSpecName "kube-api-access-8zwxq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.155372 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util" (OuterVolumeSpecName: "util") pod "cc57602e-f3ee-4f43-a4da-a31edfbc562b" (UID: "cc57602e-f3ee-4f43-a4da-a31edfbc562b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.243846 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.243902 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc57602e-f3ee-4f43-a4da-a31edfbc562b-util\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.243912 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8zwxq\" (UniqueName: \"kubernetes.io/projected/cc57602e-f3ee-4f43-a4da-a31edfbc562b-kube-api-access-8zwxq\") on node \"crc\" DevicePath \"\"" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.815277 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" event={"ID":"cc57602e-f3ee-4f43-a4da-a31edfbc562b","Type":"ContainerDied","Data":"e36e8f91e651ea8cec13610d72be6de091fd44cbb926b3b4c5faadac6c0c5eed"} Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.815356 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36e8f91e651ea8cec13610d72be6de091fd44cbb926b3b4c5faadac6c0c5eed" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.815312 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210c54tt" Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.816904 5099 generic.go:358] "Generic (PLEG): container finished" podID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerID="9935597817d9d07767721103a2cc57c0a26d369afbbe1f8f5973336b0657f3bd" exitCode=0 Dec 12 15:38:55 crc kubenswrapper[5099]: I1212 15:38:55.816963 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerDied","Data":"9935597817d9d07767721103a2cc57c0a26d369afbbe1f8f5973336b0657f3bd"} Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.095350 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld"] Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096422 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="extract" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096441 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="extract" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096455 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="pull" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096460 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="pull" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096497 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="util" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096503 5099 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="util" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.096604 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc57602e-f3ee-4f43-a4da-a31edfbc562b" containerName="extract" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.103402 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.105983 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.110411 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.110471 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgjb\" (UniqueName: \"kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.110727 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.118473 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld"] Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.211960 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.212033 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgjb\" (UniqueName: \"kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.212145 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.212616 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.212779 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.236434 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgjb\" (UniqueName: \"kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.422993 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.668799 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld"] Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.966270 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerStarted","Data":"2d0d4225ce9ae1fb9304b9f480fa2cee3288939ceee64f8a02e6bb81b8a9f7aa"} Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.966335 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerStarted","Data":"712f5ece7fbfd22a0e292fddd9875b7ccebff8536a2931ec1056d07db2b9d883"} Dec 12 15:38:56 crc kubenswrapper[5099]: I1212 15:38:56.969476 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerStarted","Data":"6403409b77f6d9668bff12f85526293822482bb8cf0548b6bd14d8ddf60562a0"} Dec 12 15:38:57 crc kubenswrapper[5099]: I1212 15:38:57.008999 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2pp8p" podStartSLOduration=5.347335833 podStartE2EDuration="6.008982305s" podCreationTimestamp="2025-12-12 15:38:51 +0000 UTC" firstStartedPulling="2025-12-12 15:38:53.737892468 +0000 UTC m=+1071.841801109" lastFinishedPulling="2025-12-12 15:38:54.39953894 +0000 UTC m=+1072.503447581" 
observedRunningTime="2025-12-12 15:38:57.005338193 +0000 UTC m=+1075.109246844" watchObservedRunningTime="2025-12-12 15:38:57.008982305 +0000 UTC m=+1075.112890946" Dec 12 15:38:57 crc kubenswrapper[5099]: I1212 15:38:57.977871 5099 generic.go:358] "Generic (PLEG): container finished" podID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerID="2d0d4225ce9ae1fb9304b9f480fa2cee3288939ceee64f8a02e6bb81b8a9f7aa" exitCode=0 Dec 12 15:38:57 crc kubenswrapper[5099]: I1212 15:38:57.977943 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerDied","Data":"2d0d4225ce9ae1fb9304b9f480fa2cee3288939ceee64f8a02e6bb81b8a9f7aa"} Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.105463 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerStarted","Data":"3ef19dfd7127ade7260d423fc3d74062e31685fd006a37165faffd1e16d4aadd"} Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.125227 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj"] Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.497776 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj"] Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.497941 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.522050 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.522119 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrvn\" (UniqueName: \"kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.522285 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.726525 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" 
(UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.726593 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qwrvn\" (UniqueName: \"kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.726640 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.727294 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.730162 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:01 crc kubenswrapper[5099]: I1212 15:39:01.874409 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwrvn\" (UniqueName: \"kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.116618 5099 generic.go:358] "Generic (PLEG): container finished" podID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerID="3ef19dfd7127ade7260d423fc3d74062e31685fd006a37165faffd1e16d4aadd" exitCode=0 Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.116947 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerDied","Data":"3ef19dfd7127ade7260d423fc3d74062e31685fd006a37165faffd1e16d4aadd"} Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.118534 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.173927 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.173982 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2pp8p" Dec 12 15:39:02 crc kubenswrapper[5099]: I1212 15:39:02.678881 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj"] Dec 12 15:39:03 crc kubenswrapper[5099]: I1212 15:39:03.123868 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" event={"ID":"46b6820d-ed06-4c04-b184-342ecf49990d","Type":"ContainerStarted","Data":"5964ada1d999fe8fb2ec6954eaeececbb23d548c2897a49f54869e8847557337"} Dec 12 15:39:03 crc kubenswrapper[5099]: I1212 15:39:03.244647 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pp8p" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="registry-server" probeResult="failure" output=< Dec 12 15:39:03 crc kubenswrapper[5099]: timeout: failed to connect service ":50051" within 1s Dec 12 15:39:03 crc kubenswrapper[5099]: > Dec 12 15:39:04 crc kubenswrapper[5099]: I1212 15:39:04.183032 5099 generic.go:358] "Generic (PLEG): container finished" podID="46b6820d-ed06-4c04-b184-342ecf49990d" containerID="0c4fa2473c6deb2fa4ba69bbb3b83b9a60f7c2d54feda1a2ea4ca106a29f9d41" exitCode=0 Dec 12 15:39:04 crc kubenswrapper[5099]: I1212 15:39:04.183187 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" event={"ID":"46b6820d-ed06-4c04-b184-342ecf49990d","Type":"ContainerDied","Data":"0c4fa2473c6deb2fa4ba69bbb3b83b9a60f7c2d54feda1a2ea4ca106a29f9d41"} Dec 12 15:39:04 crc kubenswrapper[5099]: I1212 15:39:04.318748 5099 generic.go:358] "Generic (PLEG): container finished" podID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerID="4e6ba372d0687c15d2618020b273bd75141b555efd33b6e2e0e7503514f19356" exitCode=0 Dec 12 15:39:04 crc kubenswrapper[5099]: I1212 15:39:04.318945 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerDied","Data":"4e6ba372d0687c15d2618020b273bd75141b555efd33b6e2e0e7503514f19356"} Dec 12 15:39:05 crc kubenswrapper[5099]: I1212 15:39:05.966640 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.145648 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle\") pod \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.146129 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util\") pod \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.146211 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkgjb\" (UniqueName: \"kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb\") pod \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\" (UID: \"48a2afd5-b263-42d8-9b5c-ad8c140059b7\") " Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.146953 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle" (OuterVolumeSpecName: "bundle") pod "48a2afd5-b263-42d8-9b5c-ad8c140059b7" (UID: "48a2afd5-b263-42d8-9b5c-ad8c140059b7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.166858 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb" (OuterVolumeSpecName: "kube-api-access-tkgjb") pod "48a2afd5-b263-42d8-9b5c-ad8c140059b7" (UID: "48a2afd5-b263-42d8-9b5c-ad8c140059b7"). InnerVolumeSpecName "kube-api-access-tkgjb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.172227 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util" (OuterVolumeSpecName: "util") pod "48a2afd5-b263-42d8-9b5c-ad8c140059b7" (UID: "48a2afd5-b263-42d8-9b5c-ad8c140059b7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.247703 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-util\") on node \"crc\" DevicePath \"\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.247749 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkgjb\" (UniqueName: \"kubernetes.io/projected/48a2afd5-b263-42d8-9b5c-ad8c140059b7-kube-api-access-tkgjb\") on node \"crc\" DevicePath \"\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.247759 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/48a2afd5-b263-42d8-9b5c-ad8c140059b7-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.326528 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-hvpjw"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327339 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="pull" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327365 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="pull" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327386 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="util" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327393 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="util" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327407 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="extract" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327417 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="extract" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.327540 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="48a2afd5-b263-42d8-9b5c-ad8c140059b7" containerName="extract" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.336830 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.348188 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.348362 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-2l48k\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.348480 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.354998 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-hvpjw"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.360497 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" event={"ID":"48a2afd5-b263-42d8-9b5c-ad8c140059b7","Type":"ContainerDied","Data":"712f5ece7fbfd22a0e292fddd9875b7ccebff8536a2931ec1056d07db2b9d883"} Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.360540 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="712f5ece7fbfd22a0e292fddd9875b7ccebff8536a2931ec1056d07db2b9d883" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.360635 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew6wld" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.450189 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqctr\" (UniqueName: \"kubernetes.io/projected/e34148d5-7214-46e0-954a-da6783c4452e-kube-api-access-rqctr\") pod \"obo-prometheus-operator-86648f486b-hvpjw\" (UID: \"e34148d5-7214-46e0-954a-da6783c4452e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.606213 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rqctr\" (UniqueName: \"kubernetes.io/projected/e34148d5-7214-46e0-954a-da6783c4452e-kube-api-access-rqctr\") pod \"obo-prometheus-operator-86648f486b-hvpjw\" (UID: \"e34148d5-7214-46e0-954a-da6783c4452e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.623875 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.630234 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqctr\" (UniqueName: \"kubernetes.io/projected/e34148d5-7214-46e0-954a-da6783c4452e-kube-api-access-rqctr\") pod \"obo-prometheus-operator-86648f486b-hvpjw\" (UID: \"e34148d5-7214-46e0-954a-da6783c4452e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.638488 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.646286 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-8gtd6\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.646996 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.651020 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.656899 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.659475 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.673287 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.704922 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.719696 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.719873 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.719930 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.720014 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 
15:39:06.803412 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-9tqdp"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.814837 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.822859 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.822935 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.822965 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.822970 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-kmllj\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.822996 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.825350 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.831244 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.838712 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.838913 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a94857-c430-470a-9b3e-0b596f68a51f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-92mqm\" (UID: \"50a94857-c430-470a-9b3e-0b596f68a51f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.840641 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-9tqdp"] Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.848645 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b455acf1-483f-49dd-8c2c-cec434d03423-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6966b8c496-lss6l\" (UID: \"b455acf1-483f-49dd-8c2c-cec434d03423\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.925509 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xx6z\" (UniqueName: \"kubernetes.io/projected/b073f65e-ae67-4334-be5c-55bf6afac003-kube-api-access-4xx6z\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.925727 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b073f65e-ae67-4334-be5c-55bf6afac003-observability-operator-tls\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:06 crc kubenswrapper[5099]: I1212 15:39:06.974039 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-hvpjw"] Dec 12 15:39:06 crc kubenswrapper[5099]: W1212 15:39:06.993095 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode34148d5_7214_46e0_954a_da6783c4452e.slice/crio-612741b77e2b178d79ef44c7b521d035fd6e989f4dacfab48d1c256b9158e17a WatchSource:0}: Error finding container 612741b77e2b178d79ef44c7b521d035fd6e989f4dacfab48d1c256b9158e17a: Status 404 returned error can't find the container with id 612741b77e2b178d79ef44c7b521d035fd6e989f4dacfab48d1c256b9158e17a Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.026604 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.032244 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b073f65e-ae67-4334-be5c-55bf6afac003-observability-operator-tls\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.032335 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xx6z\" (UniqueName: \"kubernetes.io/projected/b073f65e-ae67-4334-be5c-55bf6afac003-kube-api-access-4xx6z\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.035208 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.041453 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vv99b"] Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.053342 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.061731 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vv99b"] Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.084144 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b073f65e-ae67-4334-be5c-55bf6afac003-observability-operator-tls\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.084465 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-lqsxq\"" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.090579 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xx6z\" (UniqueName: \"kubernetes.io/projected/b073f65e-ae67-4334-be5c-55bf6afac003-kube-api-access-4xx6z\") pod \"observability-operator-78c97476f4-9tqdp\" (UID: \"b073f65e-ae67-4334-be5c-55bf6afac003\") " pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.133314 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5d58a52-7e78-451c-93a7-5d488a81f971-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.133400 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwq5\" (UniqueName: \"kubernetes.io/projected/d5d58a52-7e78-451c-93a7-5d488a81f971-kube-api-access-rbwq5\") 
pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.158616 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-9tqdp" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.236658 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5d58a52-7e78-451c-93a7-5d488a81f971-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.236825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwq5\" (UniqueName: \"kubernetes.io/projected/d5d58a52-7e78-451c-93a7-5d488a81f971-kube-api-access-rbwq5\") pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.238594 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5d58a52-7e78-451c-93a7-5d488a81f971-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.262787 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwq5\" (UniqueName: \"kubernetes.io/projected/d5d58a52-7e78-451c-93a7-5d488a81f971-kube-api-access-rbwq5\") pod \"perses-operator-68bdb49cbf-vv99b\" (UID: \"d5d58a52-7e78-451c-93a7-5d488a81f971\") " pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.390876 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l"] Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.391415 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" event={"ID":"e34148d5-7214-46e0-954a-da6783c4452e","Type":"ContainerStarted","Data":"612741b77e2b178d79ef44c7b521d035fd6e989f4dacfab48d1c256b9158e17a"} Dec 12 15:39:07 crc kubenswrapper[5099]: W1212 15:39:07.408531 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb455acf1_483f_49dd_8c2c_cec434d03423.slice/crio-c8c56f1f39e44d7bc8b44da91a70203313f16e197389d34200c649de25efd2b3 WatchSource:0}: Error finding container c8c56f1f39e44d7bc8b44da91a70203313f16e197389d34200c649de25efd2b3: Status 404 returned error can't find the container with id c8c56f1f39e44d7bc8b44da91a70203313f16e197389d34200c649de25efd2b3 Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.435086 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b"
Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.474319 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm"]
Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.548062 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-9tqdp"]
Dec 12 15:39:07 crc kubenswrapper[5099]: W1212 15:39:07.556840 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb073f65e_ae67_4334_be5c_55bf6afac003.slice/crio-7097b2ea968b3085a54f61f1f70340136fae91bbc132e1c41730843ab025f07a WatchSource:0}: Error finding container 7097b2ea968b3085a54f61f1f70340136fae91bbc132e1c41730843ab025f07a: Status 404 returned error can't find the container with id 7097b2ea968b3085a54f61f1f70340136fae91bbc132e1c41730843ab025f07a
Dec 12 15:39:07 crc kubenswrapper[5099]: I1212 15:39:07.731308 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-vv99b"]
Dec 12 15:39:07 crc kubenswrapper[5099]: W1212 15:39:07.740962 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d58a52_7e78_451c_93a7_5d488a81f971.slice/crio-d957d31a4ef29417f79025348c3a221999235fa045beac5a77e1659387c220f6 WatchSource:0}: Error finding container d957d31a4ef29417f79025348c3a221999235fa045beac5a77e1659387c220f6: Status 404 returned error can't find the container with id d957d31a4ef29417f79025348c3a221999235fa045beac5a77e1659387c220f6
Dec 12 15:39:08 crc kubenswrapper[5099]: I1212 15:39:08.401708 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" event={"ID":"50a94857-c430-470a-9b3e-0b596f68a51f","Type":"ContainerStarted","Data":"8297f2128865411c83f840bb40b7a6ceb91706e8b516096669d72ac26020a9ef"}
Dec 12 15:39:08 crc kubenswrapper[5099]: I1212 15:39:08.403611 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" event={"ID":"b455acf1-483f-49dd-8c2c-cec434d03423","Type":"ContainerStarted","Data":"c8c56f1f39e44d7bc8b44da91a70203313f16e197389d34200c649de25efd2b3"}
Dec 12 15:39:08 crc kubenswrapper[5099]: I1212 15:39:08.405882 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" event={"ID":"d5d58a52-7e78-451c-93a7-5d488a81f971","Type":"ContainerStarted","Data":"d957d31a4ef29417f79025348c3a221999235fa045beac5a77e1659387c220f6"}
Dec 12 15:39:08 crc kubenswrapper[5099]: I1212 15:39:08.407150 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-9tqdp" event={"ID":"b073f65e-ae67-4334-be5c-55bf6afac003","Type":"ContainerStarted","Data":"7097b2ea968b3085a54f61f1f70340136fae91bbc132e1c41730843ab025f07a"}
Dec 12 15:39:09 crc kubenswrapper[5099]: I1212 15:39:09.705362 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w47q"
Dec 12 15:39:09 crc kubenswrapper[5099]: I1212 15:39:09.846858 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"]
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.248986 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2pp8p"
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.346080 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2pp8p"
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.783171 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-f7587547f-flxx2"]
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.942479 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.958196 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.958445 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.958574 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-kmbsh\""
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.958631 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Dec 12 15:39:12 crc kubenswrapper[5099]: I1212 15:39:12.968977 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f7587547f-flxx2"]
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.040570 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pl5\" (UniqueName: \"kubernetes.io/projected/d247f235-8d1c-4b3f-80af-f0279c56248e-kube-api-access-b5pl5\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.040624 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-webhook-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.040796 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-apiservice-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.141647 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-webhook-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.141790 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-apiservice-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.141878 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pl5\" (UniqueName: \"kubernetes.io/projected/d247f235-8d1c-4b3f-80af-f0279c56248e-kube-api-access-b5pl5\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.166510 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-webhook-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.177572 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d247f235-8d1c-4b3f-80af-f0279c56248e-apiservice-cert\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.185953 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pl5\" (UniqueName: \"kubernetes.io/projected/d247f235-8d1c-4b3f-80af-f0279c56248e-kube-api-access-b5pl5\") pod \"elastic-operator-f7587547f-flxx2\" (UID: \"d247f235-8d1c-4b3f-80af-f0279c56248e\") " pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.295003 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-f7587547f-flxx2"
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.909112 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"]
Dec 12 15:39:13 crc kubenswrapper[5099]: I1212 15:39:13.910022 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2pp8p" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="registry-server" containerID="cri-o://6403409b77f6d9668bff12f85526293822482bb8cf0548b6bd14d8ddf60562a0" gracePeriod=2
Dec 12 15:39:14 crc kubenswrapper[5099]: I1212 15:39:14.519735 5099 generic.go:358] "Generic (PLEG): container finished" podID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerID="6403409b77f6d9668bff12f85526293822482bb8cf0548b6bd14d8ddf60562a0" exitCode=0
Dec 12 15:39:14 crc kubenswrapper[5099]: I1212 15:39:14.519893 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerDied","Data":"6403409b77f6d9668bff12f85526293822482bb8cf0548b6bd14d8ddf60562a0"}
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.389516 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pp8p"
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.619156 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pp8p" event={"ID":"8eaf9ae5-19b5-4b4c-871f-f143fae31b09","Type":"ContainerDied","Data":"c60be26cfea21da3c541838b6877bcbbde0ab8eb2310f65c629290351ea4fec3"}
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.619262 5099 scope.go:117] "RemoveContainer" containerID="6403409b77f6d9668bff12f85526293822482bb8cf0548b6bd14d8ddf60562a0"
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.619596 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pp8p"
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.629034 5099 ???:1] "http: TLS handshake error from 192.168.126.11:37760: no serving certificate available for the kubelet"
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.694103 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5q9\" (UniqueName: \"kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9\") pod \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") "
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.694585 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content\") pod \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") "
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.694691 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities\") pod \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\" (UID: \"8eaf9ae5-19b5-4b4c-871f-f143fae31b09\") "
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.695995 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities" (OuterVolumeSpecName: "utilities") pod "8eaf9ae5-19b5-4b4c-871f-f143fae31b09" (UID: "8eaf9ae5-19b5-4b4c-871f-f143fae31b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.701888 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9" (OuterVolumeSpecName: "kube-api-access-lg5q9") pod "8eaf9ae5-19b5-4b4c-871f-f143fae31b09" (UID: "8eaf9ae5-19b5-4b4c-871f-f143fae31b09"). InnerVolumeSpecName "kube-api-access-lg5q9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.807311 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.807350 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lg5q9\" (UniqueName: \"kubernetes.io/projected/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-kube-api-access-lg5q9\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.882035 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eaf9ae5-19b5-4b4c-871f-f143fae31b09" (UID: "8eaf9ae5-19b5-4b4c-871f-f143fae31b09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.908496 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf9ae5-19b5-4b4c-871f-f143fae31b09-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:21 crc kubenswrapper[5099]: I1212 15:39:21.977530 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"]
Dec 12 15:39:22 crc kubenswrapper[5099]: I1212 15:39:22.002040 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2pp8p"]
Dec 12 15:39:22 crc kubenswrapper[5099]: I1212 15:39:22.478130 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" path="/var/lib/kubelet/pods/8eaf9ae5-19b5-4b4c-871f-f143fae31b09/volumes"
Dec 12 15:39:28 crc kubenswrapper[5099]: I1212 15:39:28.333142 5099 scope.go:117] "RemoveContainer" containerID="9935597817d9d07767721103a2cc57c0a26d369afbbe1f8f5973336b0657f3bd"
Dec 12 15:39:28 crc kubenswrapper[5099]: I1212 15:39:28.550540 5099 scope.go:117] "RemoveContainer" containerID="077375cf66bce3c74812a090cd5ab1d1c451c8c0307b30cfde018561d31f3a50"
Dec 12 15:39:28 crc kubenswrapper[5099]: I1212 15:39:28.900599 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f7587547f-flxx2"]
Dec 12 15:39:28 crc kubenswrapper[5099]: W1212 15:39:28.908580 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd247f235_8d1c_4b3f_80af_f0279c56248e.slice/crio-b5f86ec1c0fd4f2dcfb232dde9ae7e44e27486faf511d89f9ce96dd40ad6df8f WatchSource:0}: Error finding container b5f86ec1c0fd4f2dcfb232dde9ae7e44e27486faf511d89f9ce96dd40ad6df8f: Status 404 returned error can't find the container with id b5f86ec1c0fd4f2dcfb232dde9ae7e44e27486faf511d89f9ce96dd40ad6df8f
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.669272 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-9tqdp" event={"ID":"b073f65e-ae67-4334-be5c-55bf6afac003","Type":"ContainerStarted","Data":"af57beb85b6eb508d9edd888dc7dd30fbe31de6bd33604c1eef6694749db1a89"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.670005 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-9tqdp"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.671632 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" event={"ID":"50a94857-c430-470a-9b3e-0b596f68a51f","Type":"ContainerStarted","Data":"5c1200cf92bf02e002b877d28b15f8f40393f2dd00f2e6089b5fc65a59f094f6"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.675292 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" event={"ID":"b455acf1-483f-49dd-8c2c-cec434d03423","Type":"ContainerStarted","Data":"b61b499b86371065fa6104072be526ed3600e454b65a32f65465d0dda2c46850"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.678566 5099 generic.go:358] "Generic (PLEG): container finished" podID="46b6820d-ed06-4c04-b184-342ecf49990d" containerID="b493a496bb7196c7cc65145b5addf723915c67cf47395745234a3c89cc678f4f" exitCode=0
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.678791 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" event={"ID":"46b6820d-ed06-4c04-b184-342ecf49990d","Type":"ContainerDied","Data":"b493a496bb7196c7cc65145b5addf723915c67cf47395745234a3c89cc678f4f"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.681411 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" event={"ID":"d5d58a52-7e78-451c-93a7-5d488a81f971","Type":"ContainerStarted","Data":"a2de159e32a9824ab8b466a4223a914ae13c4459149b0d96c5e8a67557cd2f07"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.681508 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.682849 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f7587547f-flxx2" event={"ID":"d247f235-8d1c-4b3f-80af-f0279c56248e","Type":"ContainerStarted","Data":"b5f86ec1c0fd4f2dcfb232dde9ae7e44e27486faf511d89f9ce96dd40ad6df8f"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.690059 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" event={"ID":"e34148d5-7214-46e0-954a-da6783c4452e","Type":"ContainerStarted","Data":"6ce39a1f968d0fc2ba13997f303b3593b08db16876811300d597eb4a78ff608f"}
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.781741 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-9tqdp"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.796785 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-9tqdp" podStartSLOduration=2.813275311 podStartE2EDuration="23.796751796s" podCreationTimestamp="2025-12-12 15:39:06 +0000 UTC" firstStartedPulling="2025-12-12 15:39:07.567701215 +0000 UTC m=+1085.671609856" lastFinishedPulling="2025-12-12 15:39:28.5511777 +0000 UTC m=+1106.655086341" observedRunningTime="2025-12-12 15:39:29.79141668 +0000 UTC m=+1107.895325351" watchObservedRunningTime="2025-12-12 15:39:29.796751796 +0000 UTC m=+1107.900660447"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.872899 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-lss6l" podStartSLOduration=2.745590026 podStartE2EDuration="23.872879885s" podCreationTimestamp="2025-12-12 15:39:06 +0000 UTC" firstStartedPulling="2025-12-12 15:39:07.424316072 +0000 UTC m=+1085.528224713" lastFinishedPulling="2025-12-12 15:39:28.551605931 +0000 UTC m=+1106.655514572" observedRunningTime="2025-12-12 15:39:29.868714469 +0000 UTC m=+1107.972623110" watchObservedRunningTime="2025-12-12 15:39:29.872879885 +0000 UTC m=+1107.976788526"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.906091 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" podStartSLOduration=3.087078945 podStartE2EDuration="23.906070461s" podCreationTimestamp="2025-12-12 15:39:06 +0000 UTC" firstStartedPulling="2025-12-12 15:39:07.750644124 +0000 UTC m=+1085.854552765" lastFinishedPulling="2025-12-12 15:39:28.56963564 +0000 UTC m=+1106.673544281" observedRunningTime="2025-12-12 15:39:29.899698409 +0000 UTC m=+1108.003607060" watchObservedRunningTime="2025-12-12 15:39:29.906070461 +0000 UTC m=+1108.009979102"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.936285 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6966b8c496-92mqm" podStartSLOduration=2.879505338 podStartE2EDuration="23.93626435s" podCreationTimestamp="2025-12-12 15:39:06 +0000 UTC" firstStartedPulling="2025-12-12 15:39:07.493741851 +0000 UTC m=+1085.597650492" lastFinishedPulling="2025-12-12 15:39:28.550500863 +0000 UTC m=+1106.654409504" observedRunningTime="2025-12-12 15:39:29.927149798 +0000 UTC m=+1108.031058449" watchObservedRunningTime="2025-12-12 15:39:29.93626435 +0000 UTC m=+1108.040172991"
Dec 12 15:39:29 crc kubenswrapper[5099]: I1212 15:39:29.981793 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-hvpjw" podStartSLOduration=2.458461034 podStartE2EDuration="23.981769259s" podCreationTimestamp="2025-12-12 15:39:06 +0000 UTC" firstStartedPulling="2025-12-12 15:39:07.027209738 +0000 UTC m=+1085.131118379" lastFinishedPulling="2025-12-12 15:39:28.550517963 +0000 UTC m=+1106.654426604" observedRunningTime="2025-12-12 15:39:29.98142683 +0000 UTC m=+1108.085335481" watchObservedRunningTime="2025-12-12 15:39:29.981769259 +0000 UTC m=+1108.085677900"
Dec 12 15:39:30 crc kubenswrapper[5099]: I1212 15:39:30.699643 5099 generic.go:358] "Generic (PLEG): container finished" podID="46b6820d-ed06-4c04-b184-342ecf49990d" containerID="15ea5ade7b9934e8b13f79fd29f836c2db52edae498c72d84f69b80c59e2d986" exitCode=0
Dec 12 15:39:30 crc kubenswrapper[5099]: I1212 15:39:30.700557 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" event={"ID":"46b6820d-ed06-4c04-b184-342ecf49990d","Type":"ContainerDied","Data":"15ea5ade7b9934e8b13f79fd29f836c2db52edae498c72d84f69b80c59e2d986"}
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.047289 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj"
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.176821 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwrvn\" (UniqueName: \"kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn\") pod \"46b6820d-ed06-4c04-b184-342ecf49990d\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") "
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.176964 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util\") pod \"46b6820d-ed06-4c04-b184-342ecf49990d\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") "
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.177066 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle\") pod \"46b6820d-ed06-4c04-b184-342ecf49990d\" (UID: \"46b6820d-ed06-4c04-b184-342ecf49990d\") "
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.180869 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle" (OuterVolumeSpecName: "bundle") pod "46b6820d-ed06-4c04-b184-342ecf49990d" (UID: "46b6820d-ed06-4c04-b184-342ecf49990d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.197525 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn" (OuterVolumeSpecName: "kube-api-access-qwrvn") pod "46b6820d-ed06-4c04-b184-342ecf49990d" (UID: "46b6820d-ed06-4c04-b184-342ecf49990d"). InnerVolumeSpecName "kube-api-access-qwrvn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.203550 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util" (OuterVolumeSpecName: "util") pod "46b6820d-ed06-4c04-b184-342ecf49990d" (UID: "46b6820d-ed06-4c04-b184-342ecf49990d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.323859 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qwrvn\" (UniqueName: \"kubernetes.io/projected/46b6820d-ed06-4c04-b184-342ecf49990d-kube-api-access-qwrvn\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.323892 5099 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-util\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.323901 5099 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46b6820d-ed06-4c04-b184-342ecf49990d-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.773586 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj" event={"ID":"46b6820d-ed06-4c04-b184-342ecf49990d","Type":"ContainerDied","Data":"5964ada1d999fe8fb2ec6954eaeececbb23d548c2897a49f54869e8847557337"}
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.773639 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5964ada1d999fe8fb2ec6954eaeececbb23d548c2897a49f54869e8847557337"
Dec 12 15:39:32 crc kubenswrapper[5099]: I1212 15:39:32.773753 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a92qcj"
Dec 12 15:39:34 crc kubenswrapper[5099]: I1212 15:39:34.840406 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f7587547f-flxx2" event={"ID":"d247f235-8d1c-4b3f-80af-f0279c56248e","Type":"ContainerStarted","Data":"40a5d306befafa303f18fbdc920280776faf24dae5555329b35fe988758d5a74"}
Dec 12 15:39:34 crc kubenswrapper[5099]: I1212 15:39:34.954443 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-f7587547f-flxx2" podStartSLOduration=17.919441274 podStartE2EDuration="22.954428949s" podCreationTimestamp="2025-12-12 15:39:12 +0000 UTC" firstStartedPulling="2025-12-12 15:39:28.911929339 +0000 UTC m=+1107.015837980" lastFinishedPulling="2025-12-12 15:39:33.946917014 +0000 UTC m=+1112.050825655" observedRunningTime="2025-12-12 15:39:34.952679874 +0000 UTC m=+1113.056588525" watchObservedRunningTime="2025-12-12 15:39:34.954428949 +0000 UTC m=+1113.058337590"
Dec 12 15:39:34 crc kubenswrapper[5099]: I1212 15:39:34.975304 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" podUID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" containerName="registry" containerID="cri-o://34657fc68ce55220860421524cc9058b020291ff4143d2ad2b1abad87745bf13" gracePeriod=30
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.569376 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570045 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="registry-server"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570067 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="registry-server"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570097 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="pull"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570103 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="pull"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570109 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="extract"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570114 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="extract"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570124 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="extract-content"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570129 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="extract-content"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570137 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="util"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570142 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="util"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570152 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="extract-utilities"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570159 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="extract-utilities"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570256 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="46b6820d-ed06-4c04-b184-342ecf49990d" containerName="extract"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.570266 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="8eaf9ae5-19b5-4b4c-871f-f143fae31b09" containerName="registry-server"
Dec 12 15:39:35 crc kubenswrapper[5099]: I1212 15:39:35.848310 5099 generic.go:358] "Generic (PLEG): container finished" podID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" containerID="34657fc68ce55220860421524cc9058b020291ff4143d2ad2b1abad87745bf13" exitCode=0
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.624857 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.628468 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.628478 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.628820 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.629082 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.629281 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-nrzzb\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.629624 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.629990 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.632393 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.632750 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.637238 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" event={"ID":"a8221fb7-b435-4a06-8a6d-7bcc4afda383","Type":"ContainerDied","Data":"34657fc68ce55220860421524cc9058b020291ff4143d2ad2b1abad87745bf13"}
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.637290 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667539 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667599 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667689 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667744 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667770 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4120dd22-dad3-4f57-b013-2ca0069cc8e6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667808 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667894 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.667969 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668026 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668051 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668077 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668161 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668202 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668240 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.668271 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769734 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769779 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769837 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769865 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769891 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769919 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.769953 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770504 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770547 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770675 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770887 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4120dd22-dad3-4f57-b013-2ca0069cc8e6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770944 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.770983 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.771017 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.771051 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.771068 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.771084 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.771275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.772403 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.773134 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.773519 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.774319 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.780567 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.780597 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.780597 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.780955 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.781209 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.782503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4120dd22-dad3-4f57-b013-2ca0069cc8e6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.793451 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4120dd22-dad3-4f57-b013-2ca0069cc8e6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4120dd22-dad3-4f57-b013-2ca0069cc8e6\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.819531 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.856373 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-g59fk" event={"ID":"a8221fb7-b435-4a06-8a6d-7bcc4afda383","Type":"ContainerDied","Data":"7606fb3ee7897327f1fffc35c2dd6155ea872ca2c0865ee8105b11be06e8eb89"}
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.856397 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-g59fk"
Dec 12 15:39:36 crc kubenswrapper[5099]: I1212 15:39:36.856449 5099 scope.go:117] "RemoveContainer" containerID="34657fc68ce55220860421524cc9058b020291ff4143d2ad2b1abad87745bf13"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.098282 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.104239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.104492 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.104546 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.105624 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.105705 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.105763 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d9zl\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.105796 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.105831 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted\") pod \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\" (UID: \"a8221fb7-b435-4a06-8a6d-7bcc4afda383\") "
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.110701 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.113698 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.120887 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.123317 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.134858 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.141179 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl" (OuterVolumeSpecName: "kube-api-access-9d9zl") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "kube-api-access-9d9zl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.142488 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.167680 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a8221fb7-b435-4a06-8a6d-7bcc4afda383" (UID: "a8221fb7-b435-4a06-8a6d-7bcc4afda383"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.257350 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9d9zl\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-kube-api-access-9d9zl\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.259697 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.259825 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a8221fb7-b435-4a06-8a6d-7bcc4afda383-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.259929 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a8221fb7-b435-4a06-8a6d-7bcc4afda383-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.260027 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.260117 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.260209 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a8221fb7-b435-4a06-8a6d-7bcc4afda383-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.542729 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"]
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.561453 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-g59fk"]
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.683254 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:39:37 crc kubenswrapper[5099]: W1212 15:39:37.690958 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4120dd22_dad3_4f57_b013_2ca0069cc8e6.slice/crio-1b60fb74eb8de91bca967eb4108e3559fd0d8a81b1d89c7eddd59a9b939ab9f4 WatchSource:0}: Error finding container 1b60fb74eb8de91bca967eb4108e3559fd0d8a81b1d89c7eddd59a9b939ab9f4: Status 404 returned error can't find the container with id 1b60fb74eb8de91bca967eb4108e3559fd0d8a81b1d89c7eddd59a9b939ab9f4
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.864504 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4120dd22-dad3-4f57-b013-2ca0069cc8e6","Type":"ContainerStarted","Data":"1b60fb74eb8de91bca967eb4108e3559fd0d8a81b1d89c7eddd59a9b939ab9f4"}
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.960442 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"]
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.961115 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" containerName="registry"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.961137 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" containerName="registry"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.961274 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" containerName="registry"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.978392 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.986075 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-9xggs\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.986490 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.986724 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 15:39:37 crc kubenswrapper[5099]: I1212 15:39:37.988099 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"]
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.076980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d97f644-c11d-4c4e-b87a-90fb360a2e28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.077318 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84p7h\" (UniqueName: \"kubernetes.io/projected/3d97f644-c11d-4c4e-b87a-90fb360a2e28-kube-api-access-84p7h\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.178964 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d97f644-c11d-4c4e-b87a-90fb360a2e28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.179094 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-84p7h\" (UniqueName: \"kubernetes.io/projected/3d97f644-c11d-4c4e-b87a-90fb360a2e28-kube-api-access-84p7h\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.179963 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d97f644-c11d-4c4e-b87a-90fb360a2e28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.203698 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-84p7h\" (UniqueName: \"kubernetes.io/projected/3d97f644-c11d-4c4e-b87a-90fb360a2e28-kube-api-access-84p7h\") pod \"cert-manager-operator-controller-manager-64c74584c4-qxcv9\" (UID: \"3d97f644-c11d-4c4e-b87a-90fb360a2e28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"
Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.307763 5099 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9" Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.475773 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8221fb7-b435-4a06-8a6d-7bcc4afda383" path="/var/lib/kubelet/pods/a8221fb7-b435-4a06-8a6d-7bcc4afda383/volumes" Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.532267 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9"] Dec 12 15:39:38 crc kubenswrapper[5099]: W1212 15:39:38.545730 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d97f644_c11d_4c4e_b87a_90fb360a2e28.slice/crio-e4bad1d3aa6e6d8162cf71e0c03133c879bf5f127cd220062406128d6627c5a1 WatchSource:0}: Error finding container e4bad1d3aa6e6d8162cf71e0c03133c879bf5f127cd220062406128d6627c5a1: Status 404 returned error can't find the container with id e4bad1d3aa6e6d8162cf71e0c03133c879bf5f127cd220062406128d6627c5a1 Dec 12 15:39:38 crc kubenswrapper[5099]: I1212 15:39:38.871859 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9" event={"ID":"3d97f644-c11d-4c4e-b87a-90fb360a2e28","Type":"ContainerStarted","Data":"e4bad1d3aa6e6d8162cf71e0c03133c879bf5f127cd220062406128d6627c5a1"} Dec 12 15:39:40 crc kubenswrapper[5099]: I1212 15:39:40.705829 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-vv99b" Dec 12 15:39:55 crc kubenswrapper[5099]: I1212 15:39:55.199521 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4120dd22-dad3-4f57-b013-2ca0069cc8e6","Type":"ContainerStarted","Data":"ebcefe46779a2b2e4aed08ee872135c2cb96c09ee7ea1efdfc44195055869b5e"} Dec 12 15:39:55 crc kubenswrapper[5099]: I1212 15:39:55.201440 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9" event={"ID":"3d97f644-c11d-4c4e-b87a-90fb360a2e28","Type":"ContainerStarted","Data":"0a62362dc1e97905eec4e44f546114bc3143b85df2093d93dc2e22a62712b924"} Dec 12 15:39:55 crc kubenswrapper[5099]: I1212 15:39:55.250045 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qxcv9" podStartSLOduration=2.901633945 podStartE2EDuration="18.25002037s" podCreationTimestamp="2025-12-12 15:39:37 +0000 UTC" firstStartedPulling="2025-12-12 15:39:38.556134435 +0000 UTC m=+1116.660043096" lastFinishedPulling="2025-12-12 15:39:53.90452088 +0000 UTC m=+1132.008429521" observedRunningTime="2025-12-12 15:39:55.249105576 +0000 UTC m=+1133.353014227" watchObservedRunningTime="2025-12-12 15:39:55.25002037 +0000 UTC m=+1133.353929011" Dec 12 15:39:55 crc kubenswrapper[5099]: I1212 15:39:55.560199 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 15:39:55 crc kubenswrapper[5099]: I1212 15:39:55.591718 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 15:39:57 crc kubenswrapper[5099]: I1212 15:39:57.278164 5099 generic.go:358] "Generic (PLEG): container finished" podID="4120dd22-dad3-4f57-b013-2ca0069cc8e6" 
containerID="ebcefe46779a2b2e4aed08ee872135c2cb96c09ee7ea1efdfc44195055869b5e" exitCode=0 Dec 12 15:39:57 crc kubenswrapper[5099]: I1212 15:39:57.278274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4120dd22-dad3-4f57-b013-2ca0069cc8e6","Type":"ContainerDied","Data":"ebcefe46779a2b2e4aed08ee872135c2cb96c09ee7ea1efdfc44195055869b5e"} Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.286975 5099 generic.go:358] "Generic (PLEG): container finished" podID="4120dd22-dad3-4f57-b013-2ca0069cc8e6" containerID="f44d02d266191098a9b91199c7482005aaddab7e55f068ff7fdc620282fbdd83" exitCode=0 Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.287037 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4120dd22-dad3-4f57-b013-2ca0069cc8e6","Type":"ContainerDied","Data":"f44d02d266191098a9b91199c7482005aaddab7e55f068ff7fdc620282fbdd83"} Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.707945 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-qzq58"] Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.719171 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-qzq58"] Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.719354 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.721956 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.722326 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.727546 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-5mh78\"" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.794063 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djt56\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-kube-api-access-djt56\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.794114 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.895436 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djt56\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-kube-api-access-djt56\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:58 crc kubenswrapper[5099]: I1212 15:39:58.895920 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.022165 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.022652 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djt56\" (UniqueName: \"kubernetes.io/projected/b67f4bf2-6832-44e9-83b2-809ae332f36f-kube-api-access-djt56\") pod \"cert-manager-webhook-7894b5b9b4-qzq58\" (UID: \"b67f4bf2-6832-44e9-83b2-809ae332f36f\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.036866 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.279484 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-qzq58"] Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.298951 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" event={"ID":"b67f4bf2-6832-44e9-83b2-809ae332f36f","Type":"ContainerStarted","Data":"c1c041273334ec50ad57fbc8dc8954f5b9268b9b769cad46955357de9c9764de"} Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.302466 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4120dd22-dad3-4f57-b013-2ca0069cc8e6","Type":"ContainerStarted","Data":"41b536da672cbe3efd6daf10ce9b5cc654b42c7fb6e5dd96f1c97937c6d5c39f"} Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.302788 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:39:59 crc kubenswrapper[5099]: I1212 15:39:59.331041 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=7.681359437 podStartE2EDuration="24.331022556s" podCreationTimestamp="2025-12-12 15:39:35 +0000 UTC" firstStartedPulling="2025-12-12 15:39:37.697628198 +0000 UTC m=+1115.801536839" lastFinishedPulling="2025-12-12 15:39:54.347291317 +0000 UTC m=+1132.451199958" observedRunningTime="2025-12-12 15:39:59.327604589 +0000 UTC m=+1137.431513230" watchObservedRunningTime="2025-12-12 15:39:59.331022556 +0000 UTC m=+1137.434931197" Dec 12 15:40:03 crc kubenswrapper[5099]: I1212 15:40:03.939432 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg"] Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.035550 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg"] Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.035707 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.037913 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-7xkj6\"" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.049721 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zctv\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-kube-api-access-2zctv\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.049800 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.150831 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.150916 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zctv\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-kube-api-access-2zctv\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.170546 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zctv\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-kube-api-access-2zctv\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.175492 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59c36d54-f6ac-47cf-9474-9f2dcc9e4af5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-49kmg\" (UID: \"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:04 crc kubenswrapper[5099]: I1212 15:40:04.386113 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.035042 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg"] Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.423429 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" event={"ID":"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5","Type":"ContainerStarted","Data":"38fa56d093cf01b970b5ee637c193085fb113ced36ef12c19c8a2181df42d22d"} Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.423482 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" event={"ID":"59c36d54-f6ac-47cf-9474-9f2dcc9e4af5","Type":"ContainerStarted","Data":"5d3053d692f3d8a37fb5abb611ffc262757ca429c5948461f9b447eff872da55"} Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.425382 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" event={"ID":"b67f4bf2-6832-44e9-83b2-809ae332f36f","Type":"ContainerStarted","Data":"901b26f156b79c15b89e7cfcda0041c3c3bdbb62aa13f7ca66b5a334b63f9eee"} Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.425559 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.443336 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-49kmg" podStartSLOduration=5.443313784 podStartE2EDuration="5.443313784s" podCreationTimestamp="2025-12-12 15:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:40:08.438799119 +0000 UTC m=+1146.542707760" watchObservedRunningTime="2025-12-12 15:40:08.443313784 +0000 UTC m=+1146.547222425" Dec 12 15:40:08 crc kubenswrapper[5099]: I1212 15:40:08.458405 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" podStartSLOduration=1.864865484 podStartE2EDuration="10.458380338s" podCreationTimestamp="2025-12-12 15:39:58 +0000 UTC" firstStartedPulling="2025-12-12 15:39:59.291591052 +0000 UTC m=+1137.395499693" lastFinishedPulling="2025-12-12 15:40:07.885105906 +0000 UTC m=+1145.989014547" observedRunningTime="2025-12-12 15:40:08.453903564 +0000 UTC m=+1146.557812215" watchObservedRunningTime="2025-12-12 15:40:08.458380338 +0000 UTC m=+1146.562288979" Dec 12 15:40:10 crc kubenswrapper[5099]: I1212 15:40:10.393369 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4120dd22-dad3-4f57-b013-2ca0069cc8e6" containerName="elasticsearch" probeResult="failure" output=< Dec 12 15:40:10 crc kubenswrapper[5099]: {"timestamp": "2025-12-12T15:40:10+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 15:40:10 crc kubenswrapper[5099]: > Dec 12 15:40:14 crc kubenswrapper[5099]: I1212 15:40:14.484777 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-qzq58" Dec 12 15:40:15 crc kubenswrapper[5099]: I1212 15:40:15.432575 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4120dd22-dad3-4f57-b013-2ca0069cc8e6" 
containerName="elasticsearch" probeResult="failure" output=< Dec 12 15:40:15 crc kubenswrapper[5099]: {"timestamp": "2025-12-12T15:40:15+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 15:40:15 crc kubenswrapper[5099]: > Dec 12 15:40:16 crc kubenswrapper[5099]: I1212 15:40:16.515530 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:40:16 crc kubenswrapper[5099]: I1212 15:40:16.515642 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:40:16 crc kubenswrapper[5099]: I1212 15:40:16.989133 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-79ch5"] Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.188734 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-79ch5"] Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.188984 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.193460 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-jdmk7\"" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.262012 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-bound-sa-token\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.262117 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nskjm\" (UniqueName: \"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-kube-api-access-nskjm\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.363915 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-bound-sa-token\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.364267 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nskjm\" (UniqueName: \"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-kube-api-access-nskjm\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.392879 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-bound-sa-token\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.393052 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nskjm\" (UniqueName: \"kubernetes.io/projected/e89cee3a-8a4c-4034-9555-6b804b716bae-kube-api-access-nskjm\") pod \"cert-manager-858d87f86b-79ch5\" (UID: \"e89cee3a-8a4c-4034-9555-6b804b716bae\") " pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.512917 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-79ch5" Dec 12 15:40:18 crc kubenswrapper[5099]: I1212 15:40:18.821339 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-79ch5"] Dec 12 15:40:18 crc kubenswrapper[5099]: W1212 15:40:18.831433 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode89cee3a_8a4c_4034_9555_6b804b716bae.slice/crio-7adf2c87ad446549e6f5fc7b527669ea7c23f58449ae1783df42af5150ba61f0 WatchSource:0}: Error finding container 7adf2c87ad446549e6f5fc7b527669ea7c23f58449ae1783df42af5150ba61f0: Status 404 returned error can't find the container with id 7adf2c87ad446549e6f5fc7b527669ea7c23f58449ae1783df42af5150ba61f0 Dec 12 15:40:19 crc kubenswrapper[5099]: I1212 15:40:19.518494 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-79ch5" event={"ID":"e89cee3a-8a4c-4034-9555-6b804b716bae","Type":"ContainerStarted","Data":"9178482825a2418b01a0a549076e9f5a99fe0187df3abac6ea784b89325ade49"} Dec 12 15:40:19 crc kubenswrapper[5099]: I1212 15:40:19.519754 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-79ch5" event={"ID":"e89cee3a-8a4c-4034-9555-6b804b716bae","Type":"ContainerStarted","Data":"7adf2c87ad446549e6f5fc7b527669ea7c23f58449ae1783df42af5150ba61f0"} Dec 12 15:40:19 crc kubenswrapper[5099]: I1212 15:40:19.538168 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-79ch5" podStartSLOduration=3.538140368 podStartE2EDuration="3.538140368s" podCreationTimestamp="2025-12-12 15:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:40:19.533330546 +0000 UTC m=+1157.637239207" watchObservedRunningTime="2025-12-12 15:40:19.538140368 +0000 UTC m=+1157.642049019" Dec 12 15:40:20 crc kubenswrapper[5099]: I1212 15:40:20.848462 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.919158 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.925040 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.927057 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4z8c7\"" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.927066 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.927751 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.927162 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935800 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935846 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935867 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935895 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935912 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935967 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.935984 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936007 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936040 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936082 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936143 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdn9\" (UniqueName: \"kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.936249 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.940635 5099 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 15:40:25 crc kubenswrapper[5099]: I1212 15:40:25.945390 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.037955 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038069 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038097 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038138 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038169 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038198 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nfdn9\" (UniqueName: \"kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038231 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038257 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038294 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038310 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038326 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038348 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.038365 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.039101 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.039227 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.039254 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.039365 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.040225 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.040396 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.040436 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.040787 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.042376 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.050427 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.050438 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.050541 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.060770 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfdn9\" (UniqueName: \"kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9\") pod \"service-telemetry-framework-index-1-build\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.307954 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:26 crc kubenswrapper[5099]: I1212 15:40:26.740756 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:26 crc kubenswrapper[5099]: W1212 15:40:26.744787 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06b4612a_8dfd_497a_88d5_fc030c5fb47f.slice/crio-e01bef5e55a27a923fadebbc89ae0ba46bf409f9cd14e5842197b66c0536b243 WatchSource:0}: Error finding container e01bef5e55a27a923fadebbc89ae0ba46bf409f9cd14e5842197b66c0536b243: Status 404 returned error can't find the container with id e01bef5e55a27a923fadebbc89ae0ba46bf409f9cd14e5842197b66c0536b243 Dec 12 15:40:27 crc kubenswrapper[5099]: I1212 15:40:27.594220 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"06b4612a-8dfd-497a-88d5-fc030c5fb47f","Type":"ContainerStarted","Data":"e01bef5e55a27a923fadebbc89ae0ba46bf409f9cd14e5842197b66c0536b243"} Dec 12 15:40:32 crc kubenswrapper[5099]: I1212 15:40:32.111053 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57788: no serving certificate available for the kubelet" Dec 12 15:40:32 crc kubenswrapper[5099]: I1212 15:40:32.625811 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"06b4612a-8dfd-497a-88d5-fc030c5fb47f","Type":"ContainerStarted","Data":"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a"} Dec 12 15:40:32 crc kubenswrapper[5099]: I1212 15:40:32.681292 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57790: no serving certificate available for the kubelet" Dec 12 15:40:33 crc kubenswrapper[5099]: I1212 15:40:33.799462 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:34 crc kubenswrapper[5099]: I1212 15:40:34.641434 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" 
podUID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" containerName="git-clone" containerID="cri-o://cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a" gracePeriod=30 Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.073121 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_06b4612a-8dfd-497a-88d5-fc030c5fb47f/git-clone/0.log" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.073466 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.076821 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.076886 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.076929 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077071 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077157 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077205 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077305 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077362 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: 
\"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077195 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077406 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077505 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077550 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077571 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077587 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077620 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfdn9\" (UniqueName: \"kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9\") pod \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\" (UID: \"06b4612a-8dfd-497a-88d5-fc030c5fb47f\") " Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077625 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077654 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.077708 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078118 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078149 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078165 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078179 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078194 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078172 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078356 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078583 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.078702 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.083625 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-pull") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "builder-dockercfg-4z8c7-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.083941 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.084261 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9" (OuterVolumeSpecName: "kube-api-access-nfdn9") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "kube-api-access-nfdn9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.085543 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-push") pod "06b4612a-8dfd-497a-88d5-fc030c5fb47f" (UID: "06b4612a-8dfd-497a-88d5-fc030c5fb47f"). InnerVolumeSpecName "builder-dockercfg-4z8c7-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179167 5099 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179257 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179277 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179289 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-pull\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179312 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179323 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/06b4612a-8dfd-497a-88d5-fc030c5fb47f-builder-dockercfg-4z8c7-push\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179341 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06b4612a-8dfd-497a-88d5-fc030c5fb47f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.179353 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nfdn9\" (UniqueName: \"kubernetes.io/projected/06b4612a-8dfd-497a-88d5-fc030c5fb47f-kube-api-access-nfdn9\") on node \"crc\" DevicePath \"\"" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.656635 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_06b4612a-8dfd-497a-88d5-fc030c5fb47f/git-clone/0.log" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.657078 5099 generic.go:358] "Generic (PLEG): container finished" podID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" containerID="cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a" exitCode=1 Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.657225 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.657207 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"06b4612a-8dfd-497a-88d5-fc030c5fb47f","Type":"ContainerDied","Data":"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a"} Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.657382 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"06b4612a-8dfd-497a-88d5-fc030c5fb47f","Type":"ContainerDied","Data":"e01bef5e55a27a923fadebbc89ae0ba46bf409f9cd14e5842197b66c0536b243"} Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.657440 5099 scope.go:117] "RemoveContainer" containerID="cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.684613 5099 scope.go:117] "RemoveContainer" containerID="cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a" Dec 12 15:40:35 crc kubenswrapper[5099]: E1212 15:40:35.685824 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a\": container with ID starting with cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a not found: ID does not exist" containerID="cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.685901 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a"} err="failed to get container status \"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a\": rpc error: code = NotFound desc = could not find container \"cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a\": container with ID starting with cd0cf866a8399ef5cf5d996aaf594897c2daf03e26e354a2dd5fd39ca1bb604a not found: ID does not exist" Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.702909 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:35 crc kubenswrapper[5099]: I1212 15:40:35.707696 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 15:40:36 crc kubenswrapper[5099]: I1212 15:40:36.477340 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" path="/var/lib/kubelet/pods/06b4612a-8dfd-497a-88d5-fc030c5fb47f/volumes" Dec 12 15:40:43 crc kubenswrapper[5099]: I1212 15:40:43.575607 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43916: no serving certificate available for the kubelet" Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.337257 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.338232 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" containerName="git-clone" Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.338252 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" containerName="git-clone" 
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.338378 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="06b4612a-8dfd-497a-88d5-fc030c5fb47f" containerName="git-clone"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.361060 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.361230 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.363510 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\""
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.363619 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\""
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.363511 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\""
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.363624 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4z8c7\""
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.363863 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450419 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450475 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7m82\" (UniqueName: \"kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450502 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450783 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450911 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450948 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450968 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.450990 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.451030 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.451056 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.451090 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.451181 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.451282 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.552984 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553038 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7m82\" (UniqueName: \"kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553062 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553315 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553407 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553448 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553470 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553498 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553545 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553597 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553652 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553722 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553786 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.554143 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.554325 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.554524 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.554657 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.553325 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.555013 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.555088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.556592 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.556902 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.561275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.561288 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.561966 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.573498 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7m82\" (UniqueName: \"kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82\") pod \"service-telemetry-framework-index-2-build\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:45 crc kubenswrapper[5099]: I1212 15:40:45.685791 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.199054 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 12 15:40:46 crc kubenswrapper[5099]: W1212 15:40:46.204772 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e3a027_4827_45c9_a2d9_ce76ee61796e.slice/crio-16ea698d3a628449838620dad19381f1e9e065f3c9a17657855a3569bdf31980 WatchSource:0}: Error finding container 16ea698d3a628449838620dad19381f1e9e065f3c9a17657855a3569bdf31980: Status 404 returned error can't find the container with id 16ea698d3a628449838620dad19381f1e9e065f3c9a17657855a3569bdf31980
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.515205 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.515312 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.753219 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"11e3a027-4827-45c9-a2d9-ce76ee61796e","Type":"ContainerStarted","Data":"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"}
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.753589 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"11e3a027-4827-45c9-a2d9-ce76ee61796e","Type":"ContainerStarted","Data":"16ea698d3a628449838620dad19381f1e9e065f3c9a17657855a3569bdf31980"}
Dec 12 15:40:46 crc kubenswrapper[5099]: I1212 15:40:46.806081 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43924: no serving certificate available for the kubelet"
Dec 12 15:40:47 crc kubenswrapper[5099]: I1212 15:40:47.834803 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 12 15:40:48 crc kubenswrapper[5099]: I1212 15:40:48.766168 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="11e3a027-4827-45c9-a2d9-ce76ee61796e" containerName="git-clone" containerID="cri-o://5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a" gracePeriod=30
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.453603 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_11e3a027-4827-45c9-a2d9-ce76ee61796e/git-clone/0.log"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.454090 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536619 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536751 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536762 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536855 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.536910 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537784 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537184 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537677 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537883 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537917 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.537964 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538077 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538126 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538128 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7m82\" (UniqueName: \"kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538265 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache\") pod \"11e3a027-4827-45c9-a2d9-ce76ee61796e\" (UID: \"11e3a027-4827-45c9-a2d9-ce76ee61796e\") "
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538280 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538479 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538691 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.538718 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539078 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539095 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539106 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539117 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539128 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539138 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539150 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/11e3a027-4827-45c9-a2d9-ce76ee61796e-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.539161 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/11e3a027-4827-45c9-a2d9-ce76ee61796e-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.544489 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.545256 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-push") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "builder-dockercfg-4z8c7-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.545279 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.545404 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-pull") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "builder-dockercfg-4z8c7-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.560355 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82" (OuterVolumeSpecName: "kube-api-access-c7m82") pod "11e3a027-4827-45c9-a2d9-ce76ee61796e" (UID: "11e3a027-4827-45c9-a2d9-ce76ee61796e"). InnerVolumeSpecName "kube-api-access-c7m82". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.640166 5099 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.640214 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-push\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.640229 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/11e3a027-4827-45c9-a2d9-ce76ee61796e-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.640238 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/11e3a027-4827-45c9-a2d9-ce76ee61796e-builder-dockercfg-4z8c7-pull\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.640250 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c7m82\" (UniqueName: \"kubernetes.io/projected/11e3a027-4827-45c9-a2d9-ce76ee61796e-kube-api-access-c7m82\") on node \"crc\" DevicePath \"\""
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776465 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_11e3a027-4827-45c9-a2d9-ce76ee61796e/git-clone/0.log"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776542 5099 generic.go:358] "Generic (PLEG): container finished" podID="11e3a027-4827-45c9-a2d9-ce76ee61796e" containerID="5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a" exitCode=1
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"11e3a027-4827-45c9-a2d9-ce76ee61796e","Type":"ContainerDied","Data":"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"}
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776718 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776709 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"11e3a027-4827-45c9-a2d9-ce76ee61796e","Type":"ContainerDied","Data":"16ea698d3a628449838620dad19381f1e9e065f3c9a17657855a3569bdf31980"}
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.776776 5099 scope.go:117] "RemoveContainer" containerID="5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.891336 5099 scope.go:117] "RemoveContainer" containerID="5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"
Dec 12 15:40:49 crc kubenswrapper[5099]: E1212 15:40:49.893622 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a\": container with ID starting with 5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a not found: ID does not exist" containerID="5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.893733 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a"} err="failed to get container status \"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a\": rpc error: code = NotFound desc = could not find container \"5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a\": container with ID starting with 5332b15f88f71602beb3c71c60e7608233f91048e47678a5477aabd13a43778a not found: ID does not exist"
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.910712 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 12 15:40:49 crc kubenswrapper[5099]: I1212 15:40:49.916971 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 12 15:40:50 crc kubenswrapper[5099]: I1212 15:40:50.474863 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e3a027-4827-45c9-a2d9-ce76ee61796e" path="/var/lib/kubelet/pods/11e3a027-4827-45c9-a2d9-ce76ee61796e/volumes"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.296565 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.297843 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11e3a027-4827-45c9-a2d9-ce76ee61796e" containerName="git-clone"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.297865 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e3a027-4827-45c9-a2d9-ce76ee61796e" containerName="git-clone"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.298056 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="11e3a027-4827-45c9-a2d9-ce76ee61796e" containerName="git-clone"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.325317 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.325476 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.329012 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\""
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.332407 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.332742 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\""
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.333045 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\""
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.335099 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4z8c7\""
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.399546 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400068 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400112 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400148 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400323 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8tpb\" (UniqueName: \"kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400481 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400556 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400634 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400869 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.400958 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.401078 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.401111 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.401141 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503407 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8tpb\" (UniqueName: \"kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503502 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503553 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503588 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503747 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503780 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503832 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503863 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-3-build\" (UID:
\"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503892 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503934 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503967 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.503997 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504036 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504384 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504511 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 
15:40:59.504803 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504845 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.504956 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.505107 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.505249 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.505326 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.510470 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.511265 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.520243 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.521991 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8tpb\" (UniqueName: \"kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb\") pod \"service-telemetry-framework-index-3-build\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.646893 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:40:59 crc kubenswrapper[5099]: I1212 15:40:59.900955 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 15:41:00 crc kubenswrapper[5099]: I1212 15:41:00.847309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"1d642475-26f5-4f0e-930f-be0ad3ca6292","Type":"ContainerStarted","Data":"71f811b0a16da1eb24771357cf58ecf0b97899c64edeb1abe4e5fbe1ac465a9f"} Dec 12 15:41:00 crc kubenswrapper[5099]: I1212 15:41:00.847703 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"1d642475-26f5-4f0e-930f-be0ad3ca6292","Type":"ContainerStarted","Data":"f34e091ebb1c0975c8450dd30fa833ac7a603a7f716831d959a7eff4e27187bd"} Dec 12 15:41:00 crc kubenswrapper[5099]: I1212 15:41:00.896264 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58622: no serving certificate available for the kubelet" Dec 12 15:41:01 crc kubenswrapper[5099]: I1212 15:41:01.927719 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.911696 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="1d642475-26f5-4f0e-930f-be0ad3ca6292" containerName="git-clone" containerID="cri-o://71f811b0a16da1eb24771357cf58ecf0b97899c64edeb1abe4e5fbe1ac465a9f" gracePeriod=30 Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.947573 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.949760 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.952853 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.955731 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:41:02 crc kubenswrapper[5099]: 
I1212 15:41:02.959593 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:41:02 crc kubenswrapper[5099]: I1212 15:41:02.963592 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:41:03 crc kubenswrapper[5099]: I1212 15:41:03.965694 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_1d642475-26f5-4f0e-930f-be0ad3ca6292/git-clone/0.log" Dec 12 15:41:03 crc kubenswrapper[5099]: I1212 15:41:03.965741 5099 generic.go:358] "Generic (PLEG): container finished" podID="1d642475-26f5-4f0e-930f-be0ad3ca6292" containerID="71f811b0a16da1eb24771357cf58ecf0b97899c64edeb1abe4e5fbe1ac465a9f" exitCode=1 Dec 12 15:41:03 crc kubenswrapper[5099]: I1212 15:41:03.965792 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"1d642475-26f5-4f0e-930f-be0ad3ca6292","Type":"ContainerDied","Data":"71f811b0a16da1eb24771357cf58ecf0b97899c64edeb1abe4e5fbe1ac465a9f"} Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.827169 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_1d642475-26f5-4f0e-930f-be0ad3ca6292/git-clone/0.log" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.827917 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912067 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912148 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912222 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8tpb\" (UniqueName: \"kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912247 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912275 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912290 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912322 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912383 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912410 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912429 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912462 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912501 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.912525 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run\") pod \"1d642475-26f5-4f0e-930f-be0ad3ca6292\" (UID: \"1d642475-26f5-4f0e-930f-be0ad3ca6292\") " Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913103 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913102 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913482 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913563 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913601 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.913566 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.914064 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.914229 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.914298 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.919218 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-pull") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "builder-dockercfg-4z8c7-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.919433 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.919454 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-push") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "builder-dockercfg-4z8c7-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.919685 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb" (OuterVolumeSpecName: "kube-api-access-p8tpb") pod "1d642475-26f5-4f0e-930f-be0ad3ca6292" (UID: "1d642475-26f5-4f0e-930f-be0ad3ca6292"). InnerVolumeSpecName "kube-api-access-p8tpb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.993238 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_1d642475-26f5-4f0e-930f-be0ad3ca6292/git-clone/0.log" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.993339 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"1d642475-26f5-4f0e-930f-be0ad3ca6292","Type":"ContainerDied","Data":"f34e091ebb1c0975c8450dd30fa833ac7a603a7f716831d959a7eff4e27187bd"} Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.993391 5099 scope.go:117] "RemoveContainer" containerID="71f811b0a16da1eb24771357cf58ecf0b97899c64edeb1abe4e5fbe1ac465a9f" Dec 12 15:41:07 crc kubenswrapper[5099]: I1212 15:41:07.993542 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013490 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013520 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013529 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013543 5099 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013577 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-pull\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013592 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8tpb\" (UniqueName: \"kubernetes.io/projected/1d642475-26f5-4f0e-930f-be0ad3ca6292-kube-api-access-p8tpb\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013601 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013609 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013617 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/1d642475-26f5-4f0e-930f-be0ad3ca6292-builder-dockercfg-4z8c7-push\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013626 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1d642475-26f5-4f0e-930f-be0ad3ca6292-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013634 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.013648 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1d642475-26f5-4f0e-930f-be0ad3ca6292-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: 
I1212 15:41:08.013683 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1d642475-26f5-4f0e-930f-be0ad3ca6292-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.025728 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.033952 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 15:41:08 crc kubenswrapper[5099]: I1212 15:41:08.475829 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d642475-26f5-4f0e-930f-be0ad3ca6292" path="/var/lib/kubelet/pods/1d642475-26f5-4f0e-930f-be0ad3ca6292/volumes" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.534256 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.535399 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d642475-26f5-4f0e-930f-be0ad3ca6292" containerName="git-clone" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.535418 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d642475-26f5-4f0e-930f-be0ad3ca6292" containerName="git-clone" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.535531 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d642475-26f5-4f0e-930f-be0ad3ca6292" containerName="git-clone" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.916435 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.916785 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.920738 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\"" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.921610 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\"" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.921653 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-4z8c7\"" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.923618 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.923901 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\"" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966064 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966125 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966169 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966218 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966258 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966285 5099 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966329 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966387 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966730 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtbm\" (UniqueName: \"kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.966887 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.967030 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.967079 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:13 crc kubenswrapper[5099]: I1212 15:41:13.967250 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " 
pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068469 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068543 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068575 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068779 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068878 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068924 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.068961 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.069000 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " 
pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.069271 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.069300 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.069342 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.069396 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.070382 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.070861 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.070108 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.071098 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.071158 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.071858 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.071956 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.071968 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.072076 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtbm\" (UniqueName: \"kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.073048 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.076460 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.077153 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.086491 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.089727 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtbm\" (UniqueName: \"kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.257296 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:14 crc kubenswrapper[5099]: I1212 15:41:14.481186 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:15 crc kubenswrapper[5099]: I1212 15:41:15.080955 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"f48b3be1-a594-47c1-9886-90f18df0f912","Type":"ContainerStarted","Data":"6242a599787295b08d71d53fe9360d1908325f8aeab4c4b89db1857bb2a1dc8d"} Dec 12 15:41:15 crc kubenswrapper[5099]: I1212 15:41:15.081431 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"f48b3be1-a594-47c1-9886-90f18df0f912","Type":"ContainerStarted","Data":"f005d446cfd9f2bbcadef3f448e8f37639b0a3e42768ba612ab4ceaef4a318be"} Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.144244 5099 ???:1] "http: TLS handshake error from 192.168.126.11:59402: no serving certificate available for the kubelet" Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.515480 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.515576 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.515642 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.516385 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:41:16 crc kubenswrapper[5099]: I1212 15:41:16.516499 5099 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0" gracePeriod=600 Dec 12 15:41:17 crc kubenswrapper[5099]: I1212 15:41:17.100213 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0" exitCode=0 Dec 12 15:41:17 crc kubenswrapper[5099]: I1212 15:41:17.100274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0"} Dec 12 15:41:17 crc kubenswrapper[5099]: I1212 15:41:17.100370 5099 scope.go:117] "RemoveContainer" containerID="3bc47be49afe36eec207faef72696c0e0a3816019790ed023f832a2c46d18ea7" Dec 12 15:41:17 crc kubenswrapper[5099]: I1212 15:41:17.269138 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:17 crc kubenswrapper[5099]: I1212 15:41:17.269452 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="f48b3be1-a594-47c1-9886-90f18df0f912" containerName="git-clone" containerID="cri-o://6242a599787295b08d71d53fe9360d1908325f8aeab4c4b89db1857bb2a1dc8d" gracePeriod=30 Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.112644 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1"} Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.122801 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_f48b3be1-a594-47c1-9886-90f18df0f912/git-clone/0.log" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.122878 5099 generic.go:358] "Generic (PLEG): container finished" podID="f48b3be1-a594-47c1-9886-90f18df0f912" containerID="6242a599787295b08d71d53fe9360d1908325f8aeab4c4b89db1857bb2a1dc8d" exitCode=1 Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.123010 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"f48b3be1-a594-47c1-9886-90f18df0f912","Type":"ContainerDied","Data":"6242a599787295b08d71d53fe9360d1908325f8aeab4c4b89db1857bb2a1dc8d"} Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.234847 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_f48b3be1-a594-47c1-9886-90f18df0f912/git-clone/0.log" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.234953 5099 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.278803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.278869 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.278917 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.278950 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279000 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279406 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
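The kube-api-access-* volumes mounted for these pods above and unmounted just below are the standard projected service-account volumes. A hedged sketch of their usual shape, assuming the stock projection (token, kube-root-ca.crt, namespace) and the conventional 3607-second token lifetime; this is the generic structure, not this cluster's exact object:

```go
// Hedged sketch: generic shape of a "kube-api-access-*" projected volume
// (service-account token + kube-root-ca.crt + namespace). The expiration
// value is the usual default, assumed here rather than read from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/ptr"
)

func main() {
	vol := corev1.Volume{
		Name: "kube-api-access-gjtbm", // name taken from the log entries above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: ptr.To[int64](3607), // assumed default
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}
```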
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279516 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279571 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279611 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279729 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279754 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjtbm\" (UniqueName: \"kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279781 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279808 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279849 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push\") pod \"f48b3be1-a594-47c1-9886-90f18df0f912\" (UID: \"f48b3be1-a594-47c1-9886-90f18df0f912\") " Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280117 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279771 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod 
"f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.279866 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280022 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280039 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280076 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280113 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280438 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.280492 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.286284 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.286619 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm" (OuterVolumeSpecName: "kube-api-access-gjtbm") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "kube-api-access-gjtbm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.286653 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-pull") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "builder-dockercfg-4z8c7-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.286822 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push" (OuterVolumeSpecName: "builder-dockercfg-4z8c7-push") pod "f48b3be1-a594-47c1-9886-90f18df0f912" (UID: "f48b3be1-a594-47c1-9886-90f18df0f912"). InnerVolumeSpecName "builder-dockercfg-4z8c7-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384686 5099 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384725 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjtbm\" (UniqueName: \"kubernetes.io/projected/f48b3be1-a594-47c1-9886-90f18df0f912-kube-api-access-gjtbm\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384736 5099 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384754 5099 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384763 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-push\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-push\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384772 5099 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384780 5099 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-4z8c7-pull\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-builder-dockercfg-4z8c7-pull\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384788 5099 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f48b3be1-a594-47c1-9886-90f18df0f912-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384797 5099 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f48b3be1-a594-47c1-9886-90f18df0f912-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384809 5099 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384820 5099 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f48b3be1-a594-47c1-9886-90f18df0f912-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:18 crc kubenswrapper[5099]: I1212 15:41:18.384828 5099 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f48b3be1-a594-47c1-9886-90f18df0f912-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.131281 5099 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_f48b3be1-a594-47c1-9886-90f18df0f912/git-clone/0.log" Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.131786 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"f48b3be1-a594-47c1-9886-90f18df0f912","Type":"ContainerDied","Data":"f005d446cfd9f2bbcadef3f448e8f37639b0a3e42768ba612ab4ceaef4a318be"} Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.131814 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.131873 5099 scope.go:117] "RemoveContainer" containerID="6242a599787295b08d71d53fe9360d1908325f8aeab4c4b89db1857bb2a1dc8d" Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.155002 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:19 crc kubenswrapper[5099]: I1212 15:41:19.163397 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.475005 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48b3be1-a594-47c1-9886-90f18df0f912" path="/var/lib/kubelet/pods/f48b3be1-a594-47c1-9886-90f18df0f912/volumes" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.723571 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.724379 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f48b3be1-a594-47c1-9886-90f18df0f912" containerName="git-clone" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.724402 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48b3be1-a594-47c1-9886-90f18df0f912" containerName="git-clone" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.724528 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f48b3be1-a594-47c1-9886-90f18df0f912" containerName="git-clone" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.732500 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.732626 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.734918 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-xmtv4\"" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.800541 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qht9\" (UniqueName: \"kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9\") pod \"infrawatch-operators-865jd\" (UID: \"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b\") " pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.901999 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qht9\" (UniqueName: \"kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9\") pod \"infrawatch-operators-865jd\" (UID: \"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b\") " pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:20 crc kubenswrapper[5099]: I1212 15:41:20.925529 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qht9\" (UniqueName: \"kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9\") pod \"infrawatch-operators-865jd\" (UID: \"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b\") " pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:21 crc kubenswrapper[5099]: I1212 15:41:21.052370 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:21 crc kubenswrapper[5099]: I1212 15:41:21.396927 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:21 crc kubenswrapper[5099]: W1212 15:41:21.407874 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fa39604_d32c_4dfc_a3ce_b67a1d6c354b.slice/crio-ec895a4c4eacc6942a93f32154992215b6522b38b9d80054ec424d35a663c1c7 WatchSource:0}: Error finding container ec895a4c4eacc6942a93f32154992215b6522b38b9d80054ec424d35a663c1c7: Status 404 returned error can't find the container with id ec895a4c4eacc6942a93f32154992215b6522b38b9d80054ec424d35a663c1c7 Dec 12 15:41:21 crc kubenswrapper[5099]: E1212 15:41:21.476368 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:41:21 crc kubenswrapper[5099]: E1212 15:41:21.476714 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qht9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-865jd_service-telemetry(8fa39604-d32c-4dfc-a3ce-b67a1d6c354b): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:41:21 crc kubenswrapper[5099]: E1212 15:41:21.477965 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-865jd" 
podUID="8fa39604-d32c-4dfc-a3ce-b67a1d6c354b" Dec 12 15:41:22 crc kubenswrapper[5099]: I1212 15:41:22.161001 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865jd" event={"ID":"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b","Type":"ContainerStarted","Data":"ec895a4c4eacc6942a93f32154992215b6522b38b9d80054ec424d35a663c1c7"} Dec 12 15:41:22 crc kubenswrapper[5099]: E1212 15:41:22.161996 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-865jd" podUID="8fa39604-d32c-4dfc-a3ce-b67a1d6c354b" Dec 12 15:41:23 crc kubenswrapper[5099]: E1212 15:41:23.168931 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-865jd" podUID="8fa39604-d32c-4dfc-a3ce-b67a1d6c354b" Dec 12 15:41:26 crc kubenswrapper[5099]: I1212 15:41:26.512345 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:26 crc kubenswrapper[5099]: I1212 15:41:26.743284 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:26 crc kubenswrapper[5099]: I1212 15:41:26.803630 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qht9\" (UniqueName: \"kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9\") pod \"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b\" (UID: \"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b\") " Dec 12 15:41:26 crc kubenswrapper[5099]: I1212 15:41:26.809846 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9" (OuterVolumeSpecName: "kube-api-access-6qht9") pod "8fa39604-d32c-4dfc-a3ce-b67a1d6c354b" (UID: "8fa39604-d32c-4dfc-a3ce-b67a1d6c354b"). InnerVolumeSpecName "kube-api-access-6qht9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:41:26 crc kubenswrapper[5099]: I1212 15:41:26.905714 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6qht9\" (UniqueName: \"kubernetes.io/projected/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b-kube-api-access-6qht9\") on node \"crc\" DevicePath \"\"" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.194196 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865jd" event={"ID":"8fa39604-d32c-4dfc-a3ce-b67a1d6c354b","Type":"ContainerDied","Data":"ec895a4c4eacc6942a93f32154992215b6522b38b9d80054ec424d35a663c1c7"} Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.194284 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865jd" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.274782 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.286626 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-865jd"] Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.320269 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-42fpw"] Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.366529 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-42fpw"] Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.366732 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-42fpw" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.369239 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-xmtv4\"" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.413391 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qb4\" (UniqueName: \"kubernetes.io/projected/362b85ef-f3b4-4656-bd6f-567457c085aa-kube-api-access-w9qb4\") pod \"infrawatch-operators-42fpw\" (UID: \"362b85ef-f3b4-4656-bd6f-567457c085aa\") " pod="service-telemetry/infrawatch-operators-42fpw" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.514956 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9qb4\" (UniqueName: \"kubernetes.io/projected/362b85ef-f3b4-4656-bd6f-567457c085aa-kube-api-access-w9qb4\") pod \"infrawatch-operators-42fpw\" (UID: \"362b85ef-f3b4-4656-bd6f-567457c085aa\") " pod="service-telemetry/infrawatch-operators-42fpw" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.533039 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qb4\" (UniqueName: \"kubernetes.io/projected/362b85ef-f3b4-4656-bd6f-567457c085aa-kube-api-access-w9qb4\") pod \"infrawatch-operators-42fpw\" (UID: \"362b85ef-f3b4-4656-bd6f-567457c085aa\") " pod="service-telemetry/infrawatch-operators-42fpw" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.690102 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-42fpw" Dec 12 15:41:27 crc kubenswrapper[5099]: I1212 15:41:27.973235 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-42fpw"] Dec 12 15:41:28 crc kubenswrapper[5099]: E1212 15:41:28.032163 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:41:28 crc kubenswrapper[5099]: E1212 15:41:28.032374 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:41:28 crc kubenswrapper[5099]: E1212 15:41:28.034380 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:41:28 crc kubenswrapper[5099]: I1212 15:41:28.204320 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-42fpw" event={"ID":"362b85ef-f3b4-4656-bd6f-567457c085aa","Type":"ContainerStarted","Data":"5e499afa1936e1988bbf1b8d68210d7efbe474a2c69e7ff36b15932ef9bdb858"} Dec 12 15:41:28 crc kubenswrapper[5099]: E1212 15:41:28.205184 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:41:28 crc kubenswrapper[5099]: I1212 15:41:28.475372 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa39604-d32c-4dfc-a3ce-b67a1d6c354b" path="/var/lib/kubelet/pods/8fa39604-d32c-4dfc-a3ce-b67a1d6c354b/volumes" Dec 12 15:41:29 crc kubenswrapper[5099]: E1212 15:41:29.232808 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: 
manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:41:43 crc kubenswrapper[5099]: E1212 15:41:43.551978 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:41:43 crc kubenswrapper[5099]: E1212 15:41:43.552689 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:41:43 crc kubenswrapper[5099]: E1212 15:41:43.553939 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:41:58 crc kubenswrapper[5099]: E1212 15:41:58.467433 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:42:10 crc kubenswrapper[5099]: E1212 15:42:10.537493 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:42:10 crc kubenswrapper[5099]: E1212 15:42:10.538301 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:42:10 crc kubenswrapper[5099]: E1212 15:42:10.539554 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" 
podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:42:24 crc kubenswrapper[5099]: I1212 15:42:24.467813 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:42:24 crc kubenswrapper[5099]: E1212 15:42:24.468679 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:42:35 crc kubenswrapper[5099]: E1212 15:42:35.468155 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:42:48 crc kubenswrapper[5099]: E1212 15:42:48.467594 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:42:59 crc kubenswrapper[5099]: E1212 15:42:59.537256 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:42:59 crc kubenswrapper[5099]: E1212 15:42:59.538552 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 15:42:59 crc kubenswrapper[5099]: E1212 15:42:59.540399 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:43:10 crc kubenswrapper[5099]: E1212 15:43:10.500697 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:43:22 crc kubenswrapper[5099]: E1212 15:43:22.474325 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:43:27 crc kubenswrapper[5099]: I1212 15:43:27.503171 5099 ???:1] "http: TLS handshake error from 192.168.126.11:50372: no serving certificate available for the kubelet"
Dec 12 15:43:35 crc kubenswrapper[5099]: E1212 15:43:35.467069 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:43:46 crc kubenswrapper[5099]: I1212 15:43:46.519983 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:43:46 crc kubenswrapper[5099]: I1212 15:43:46.520640 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:43:47 crc kubenswrapper[5099]: E1212 15:43:47.469011 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:44:01 crc kubenswrapper[5099]: E1212 15:44:01.467493 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:44:12 crc kubenswrapper[5099]: E1212 15:44:12.473801 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:44:16 crc kubenswrapper[5099]: I1212 15:44:16.515874 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:44:16 crc kubenswrapper[5099]: I1212 15:44:16.516422 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:44:26 crc kubenswrapper[5099]: E1212 15:44:26.530511 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:44:26 crc kubenswrapper[5099]: E1212 15:44:26.531284 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 15:44:26 crc kubenswrapper[5099]: E1212 15:44:26.532503 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:44:40 crc kubenswrapper[5099]: E1212 15:44:40.468308 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:44:46 crc kubenswrapper[5099]: I1212 15:44:46.515919 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:44:46 crc kubenswrapper[5099]: I1212 15:44:46.516421 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:44:46 crc kubenswrapper[5099]: I1212 15:44:46.516482 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz"
Dec 12 15:44:46 crc kubenswrapper[5099]: I1212 15:44:46.517253 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 15:44:46 crc kubenswrapper[5099]: I1212 15:44:46.517343 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1" gracePeriod=600
Dec 12 15:44:47 crc kubenswrapper[5099]: I1212 15:44:47.077534 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1" exitCode=0
Dec 12 15:44:47 crc kubenswrapper[5099]: I1212 15:44:47.077606 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1"}
Dec 12 15:44:47 crc kubenswrapper[5099]: I1212 15:44:47.078115 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"}
Dec 12 15:44:47 crc kubenswrapper[5099]: I1212 15:44:47.078148 5099 scope.go:117] "RemoveContainer" containerID="e46f729345ce4b4127ff28338116252ec1967c1ded006dff545ba615b93a08c0"
Dec 12 15:44:54 crc kubenswrapper[5099]: E1212 15:44:54.467498 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.139320 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"]
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.608629 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"]
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.608780 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.612055 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.615206 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.689601 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2cm\" (UniqueName: \"kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.689955 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.690038 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.791895 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2cm\" (UniqueName: \"kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.792009 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.792080 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.793472 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.914368 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.924818 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2cm\" (UniqueName: \"kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm\") pod \"collect-profiles-29425905-x6b4w\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:00 crc kubenswrapper[5099]: I1212 15:45:00.935212 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.177224 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w"]
Dec 12 15:45:01 crc kubenswrapper[5099]: W1212 15:45:01.181571 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121f54aa_8e28_429e_a4ba_a01038b32269.slice/crio-8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d WatchSource:0}: Error finding container 8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d: Status 404 returned error can't find the container with id 8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.475801 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htp6w"]
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.606645 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htp6w"]
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.606784 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.626271 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.626327 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.626346 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/f9240542-b3b4-4dbc-b482-9d66e55be92c-kube-api-access-t42wq\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.728334 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.728400 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.728428 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/f9240542-b3b4-4dbc-b482-9d66e55be92c-kube-api-access-t42wq\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.729491 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.729650 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content\") pod \"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w"
\"community-operators-htp6w\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:01 crc kubenswrapper[5099]: I1212 15:45:01.924835 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:02 crc kubenswrapper[5099]: I1212 15:45:02.191043 5099 generic.go:358] "Generic (PLEG): container finished" podID="121f54aa-8e28-429e-a4ba-a01038b32269" containerID="3f678bc1935e786f966820ca3018e44ee391ba83b949c94c9d9d681b998cad95" exitCode=0 Dec 12 15:45:02 crc kubenswrapper[5099]: I1212 15:45:02.191101 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w" event={"ID":"121f54aa-8e28-429e-a4ba-a01038b32269","Type":"ContainerDied","Data":"3f678bc1935e786f966820ca3018e44ee391ba83b949c94c9d9d681b998cad95"} Dec 12 15:45:02 crc kubenswrapper[5099]: I1212 15:45:02.192187 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w" event={"ID":"121f54aa-8e28-429e-a4ba-a01038b32269","Type":"ContainerStarted","Data":"8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d"} Dec 12 15:45:02 crc kubenswrapper[5099]: I1212 15:45:02.532330 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htp6w"] Dec 12 15:45:02 crc kubenswrapper[5099]: W1212 15:45:02.600692 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9240542_b3b4_4dbc_b482_9d66e55be92c.slice/crio-3ff49907fc15ab75d56d704c79c76399e157e56f011d97b3ad822994896aee79 WatchSource:0}: Error finding container 3ff49907fc15ab75d56d704c79c76399e157e56f011d97b3ad822994896aee79: Status 404 returned error can't find the container with id 3ff49907fc15ab75d56d704c79c76399e157e56f011d97b3ad822994896aee79 Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.199253 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerStarted","Data":"3ff49907fc15ab75d56d704c79c76399e157e56f011d97b3ad822994896aee79"} Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.427826 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.510912 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x2cm\" (UniqueName: \"kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm\") pod \"121f54aa-8e28-429e-a4ba-a01038b32269\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.511044 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume\") pod \"121f54aa-8e28-429e-a4ba-a01038b32269\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.511149 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume\") pod \"121f54aa-8e28-429e-a4ba-a01038b32269\" (UID: \"121f54aa-8e28-429e-a4ba-a01038b32269\") " Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.511651 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume" (OuterVolumeSpecName: "config-volume") pod "121f54aa-8e28-429e-a4ba-a01038b32269" (UID: "121f54aa-8e28-429e-a4ba-a01038b32269"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.517459 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm" (OuterVolumeSpecName: "kube-api-access-2x2cm") pod "121f54aa-8e28-429e-a4ba-a01038b32269" (UID: "121f54aa-8e28-429e-a4ba-a01038b32269"). InnerVolumeSpecName "kube-api-access-2x2cm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.517718 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "121f54aa-8e28-429e-a4ba-a01038b32269" (UID: "121f54aa-8e28-429e-a4ba-a01038b32269"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.612547 5099 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121f54aa-8e28-429e-a4ba-a01038b32269-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.612579 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2x2cm\" (UniqueName: \"kubernetes.io/projected/121f54aa-8e28-429e-a4ba-a01038b32269-kube-api-access-2x2cm\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:03 crc kubenswrapper[5099]: I1212 15:45:03.612589 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121f54aa-8e28-429e-a4ba-a01038b32269-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:04 crc kubenswrapper[5099]: I1212 15:45:04.220297 5099 generic.go:358] "Generic (PLEG): container finished" podID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerID="25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127" exitCode=0 Dec 12 15:45:04 crc kubenswrapper[5099]: I1212 15:45:04.220561 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerDied","Data":"25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127"} Dec 12 15:45:04 crc kubenswrapper[5099]: I1212 15:45:04.225805 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w" event={"ID":"121f54aa-8e28-429e-a4ba-a01038b32269","Type":"ContainerDied","Data":"8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d"} Dec 12 15:45:04 crc kubenswrapper[5099]: I1212 15:45:04.225921 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8711a7a8abe67582ed6c11877bd85498cf1a82c9af0b71dda1c85aa9fdacc17d" Dec 12 15:45:04 crc kubenswrapper[5099]: I1212 15:45:04.225958 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-x6b4w" Dec 12 15:45:06 crc kubenswrapper[5099]: I1212 15:45:06.240546 5099 generic.go:358] "Generic (PLEG): container finished" podID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerID="5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a" exitCode=0 Dec 12 15:45:06 crc kubenswrapper[5099]: I1212 15:45:06.241148 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerDied","Data":"5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a"} Dec 12 15:45:08 crc kubenswrapper[5099]: I1212 15:45:08.260887 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerStarted","Data":"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8"} Dec 12 15:45:08 crc kubenswrapper[5099]: I1212 15:45:08.429112 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htp6w" podStartSLOduration=6.332933923 podStartE2EDuration="7.429076243s" podCreationTimestamp="2025-12-12 15:45:01 +0000 UTC" firstStartedPulling="2025-12-12 15:45:04.221498629 +0000 UTC m=+1442.325407260" lastFinishedPulling="2025-12-12 15:45:05.317640939 +0000 UTC m=+1443.421549580" observedRunningTime="2025-12-12 15:45:08.426903008 +0000 UTC m=+1446.530811639" watchObservedRunningTime="2025-12-12 15:45:08.429076243 +0000 UTC m=+1446.532984884" Dec 12 15:45:08 crc kubenswrapper[5099]: E1212 15:45:08.467564 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:45:11 crc kubenswrapper[5099]: I1212 15:45:11.980486 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:11 crc kubenswrapper[5099]: I1212 15:45:11.981505 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:12 crc kubenswrapper[5099]: I1212 15:45:12.026457 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:12 crc kubenswrapper[5099]: I1212 15:45:12.323789 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:12 crc kubenswrapper[5099]: I1212 15:45:12.380858 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-htp6w"] Dec 12 15:45:14 crc kubenswrapper[5099]: I1212 15:45:14.301696 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-htp6w" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="registry-server" containerID="cri-o://95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8" gracePeriod=2 Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.167812 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.208110 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities\") pod \"f9240542-b3b4-4dbc-b482-9d66e55be92c\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.208317 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/f9240542-b3b4-4dbc-b482-9d66e55be92c-kube-api-access-t42wq\") pod \"f9240542-b3b4-4dbc-b482-9d66e55be92c\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.208350 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content\") pod \"f9240542-b3b4-4dbc-b482-9d66e55be92c\" (UID: \"f9240542-b3b4-4dbc-b482-9d66e55be92c\") " Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.209951 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities" (OuterVolumeSpecName: "utilities") pod "f9240542-b3b4-4dbc-b482-9d66e55be92c" (UID: "f9240542-b3b4-4dbc-b482-9d66e55be92c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.221016 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9240542-b3b4-4dbc-b482-9d66e55be92c-kube-api-access-t42wq" (OuterVolumeSpecName: "kube-api-access-t42wq") pod "f9240542-b3b4-4dbc-b482-9d66e55be92c" (UID: "f9240542-b3b4-4dbc-b482-9d66e55be92c"). InnerVolumeSpecName "kube-api-access-t42wq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.262429 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9240542-b3b4-4dbc-b482-9d66e55be92c" (UID: "f9240542-b3b4-4dbc-b482-9d66e55be92c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.310979 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.311046 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/f9240542-b3b4-4dbc-b482-9d66e55be92c-kube-api-access-t42wq\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.311071 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9240542-b3b4-4dbc-b482-9d66e55be92c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.312264 5099 generic.go:358] "Generic (PLEG): container finished" podID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerID="95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8" exitCode=0 Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.312357 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerDied","Data":"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8"} Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.312397 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htp6w" event={"ID":"f9240542-b3b4-4dbc-b482-9d66e55be92c","Type":"ContainerDied","Data":"3ff49907fc15ab75d56d704c79c76399e157e56f011d97b3ad822994896aee79"} Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.312456 5099 scope.go:117] "RemoveContainer" containerID="95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.312719 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htp6w" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.347833 5099 scope.go:117] "RemoveContainer" containerID="5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.369466 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htp6w"] Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.372332 5099 scope.go:117] "RemoveContainer" containerID="25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.377923 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-htp6w"] Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.397104 5099 scope.go:117] "RemoveContainer" containerID="95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8" Dec 12 15:45:15 crc kubenswrapper[5099]: E1212 15:45:15.397550 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8\": container with ID starting with 95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8 not found: ID does not exist" containerID="95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.397585 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8"} err="failed to get container status \"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8\": rpc error: code = NotFound desc = could not find container \"95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8\": container with ID starting with 95b89c597133f17e996fda18a13fd655dcadc5863f1fe7ab6d67454c1528fca8 not found: ID does not exist" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.397608 5099 scope.go:117] "RemoveContainer" containerID="5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a" Dec 12 15:45:15 crc kubenswrapper[5099]: E1212 15:45:15.398042 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a\": container with ID starting with 5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a not found: ID does not exist" containerID="5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.398075 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a"} err="failed to get container status \"5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a\": rpc error: code = NotFound desc = could not find container \"5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a\": container with ID starting with 5028593e84f3440cfe5a718169ceb2c93dd1b6ed29eb14e5fb7f0e31ca6e5b7a not found: ID does not exist" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.398095 5099 scope.go:117] "RemoveContainer" containerID="25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127" Dec 12 15:45:15 crc kubenswrapper[5099]: E1212 15:45:15.398342 5099 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127\": container with ID starting with 25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127 not found: ID does not exist" containerID="25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127" Dec 12 15:45:15 crc kubenswrapper[5099]: I1212 15:45:15.398369 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127"} err="failed to get container status \"25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127\": rpc error: code = NotFound desc = could not find container \"25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127\": container with ID starting with 25fc69a922f8fa2f63645b55db211ca5a64c971abe975a904a60de56ba29c127 not found: ID does not exist" Dec 12 15:45:16 crc kubenswrapper[5099]: I1212 15:45:16.481266 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" path="/var/lib/kubelet/pods/f9240542-b3b4-4dbc-b482-9d66e55be92c/volumes" Dec 12 15:45:19 crc kubenswrapper[5099]: E1212 15:45:19.467524 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:45:30 crc kubenswrapper[5099]: E1212 15:45:30.467044 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:45:42 crc kubenswrapper[5099]: E1212 15:45:42.496809 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI 
Dec 12 15:45:42 crc kubenswrapper[5099]: E1212 15:45:42.496809 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:45:53 crc kubenswrapper[5099]: E1212 15:45:53.467336 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.028992 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.035742 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.036850 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.040868 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.044857 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 15:46:03 crc kubenswrapper[5099]: I1212 15:46:03.048246 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 12 15:46:08 crc kubenswrapper[5099]: E1212 15:46:08.468252 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:46:22 crc kubenswrapper[5099]: E1212 15:46:22.488765 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.114773 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-w8hnr"]
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116187 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="121f54aa-8e28-429e-a4ba-a01038b32269" containerName="collect-profiles"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116217 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="121f54aa-8e28-429e-a4ba-a01038b32269" containerName="collect-profiles"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116248 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="extract-content"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116260 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="extract-content"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116286 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="registry-server"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116294 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="registry-server"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116332 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="extract-utilities"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116341 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="extract-utilities"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116548 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="121f54aa-8e28-429e-a4ba-a01038b32269" containerName="collect-profiles"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.116580 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9240542-b3b4-4dbc-b482-9d66e55be92c" containerName="registry-server"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.126061 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-w8hnr"]
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.126211 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-w8hnr"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.227092 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4plbh\" (UniqueName: \"kubernetes.io/projected/911b7734-5d38-4d7f-b31b-a951d59f3fc1-kube-api-access-4plbh\") pod \"infrawatch-operators-w8hnr\" (UID: \"911b7734-5d38-4d7f-b31b-a951d59f3fc1\") " pod="service-telemetry/infrawatch-operators-w8hnr"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.328381 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4plbh\" (UniqueName: \"kubernetes.io/projected/911b7734-5d38-4d7f-b31b-a951d59f3fc1-kube-api-access-4plbh\") pod \"infrawatch-operators-w8hnr\" (UID: \"911b7734-5d38-4d7f-b31b-a951d59f3fc1\") " pod="service-telemetry/infrawatch-operators-w8hnr"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.409651 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4plbh\" (UniqueName: \"kubernetes.io/projected/911b7734-5d38-4d7f-b31b-a951d59f3fc1-kube-api-access-4plbh\") pod \"infrawatch-operators-w8hnr\" (UID: \"911b7734-5d38-4d7f-b31b-a951d59f3fc1\") " pod="service-telemetry/infrawatch-operators-w8hnr"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.461772 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-w8hnr"
Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.694279 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-w8hnr"]
Dec 12 15:46:25 crc kubenswrapper[5099]: E1212 15:46:25.774119 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:46:25 crc kubenswrapper[5099]: E1212 15:46:25.774506 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:46:25 crc kubenswrapper[5099]: E1212 15:46:25.776684 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:46:25 crc kubenswrapper[5099]: I1212 15:46:25.927132 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-w8hnr" event={"ID":"911b7734-5d38-4d7f-b31b-a951d59f3fc1","Type":"ContainerStarted","Data":"5603061c9d715fe68ceeca81d3f84a44675c25fc22c25961e87ccefc4608c4aa"} Dec 12 15:46:25 crc kubenswrapper[5099]: E1212 15:46:25.928215 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:46:26 crc kubenswrapper[5099]: E1212 15:46:26.938121 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:46:34 crc kubenswrapper[5099]: E1212 15:46:34.467593 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:46:42 crc kubenswrapper[5099]: E1212 15:46:42.595996 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:46:42 crc kubenswrapper[5099]: E1212 15:46:42.596764 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:46:42 crc kubenswrapper[5099]: E1212 15:46:42.598013 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:46:46 crc kubenswrapper[5099]: I1212 15:46:46.515197 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:46:46 crc kubenswrapper[5099]: I1212 15:46:46.515730 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:46:47 crc kubenswrapper[5099]: E1212 15:46:47.468170 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:46:53 crc kubenswrapper[5099]: E1212 15:46:53.468065 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:00 crc kubenswrapper[5099]: E1212 15:47:00.467232 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:47:06 crc kubenswrapper[5099]: E1212 15:47:06.533552 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 
15:47:06 crc kubenswrapper[5099]: E1212 15:47:06.535817 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:47:06 crc kubenswrapper[5099]: E1212 15:47:06.537155 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:15 crc kubenswrapper[5099]: E1212 15:47:15.653222 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:47:15 crc kubenswrapper[5099]: E1212 15:47:15.653771 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:47:15 crc kubenswrapper[5099]: E1212 15:47:15.655236 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:47:16 crc kubenswrapper[5099]: I1212 15:47:16.515421 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:47:16 crc kubenswrapper[5099]: I1212 15:47:16.515948 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:47:17 crc kubenswrapper[5099]: E1212 15:47:17.468477 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:28 crc kubenswrapper[5099]: I1212 15:47:28.468753 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:47:28 crc kubenswrapper[5099]: E1212 15:47:28.470030 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:47:28 crc kubenswrapper[5099]: E1212 15:47:28.470028 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:39 crc kubenswrapper[5099]: E1212 15:47:39.468351 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:43 crc kubenswrapper[5099]: E1212 15:47:43.470679 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:47:46 crc 
kubenswrapper[5099]: I1212 15:47:46.515334 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.515447 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.515535 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.516313 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"} pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.516405 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" containerID="cri-o://ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" gracePeriod=600 Dec 12 15:47:46 crc kubenswrapper[5099]: E1212 15:47:46.644395 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.844644 5099 generic.go:358] "Generic (PLEG): container finished" podID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" exitCode=0 Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.844766 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerDied","Data":"ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"} Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.844875 5099 scope.go:117] "RemoveContainer" containerID="2157bee565804156094f00eeea4b4f626827f01a14b906182e70927c4bff20c1" Dec 12 15:47:46 crc kubenswrapper[5099]: I1212 15:47:46.845431 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:47:46 crc kubenswrapper[5099]: E1212 15:47:46.846817 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:47:51 crc kubenswrapper[5099]: E1212 15:47:51.539604 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 15:47:51 crc kubenswrapper[5099]: E1212 15:47:51.540481 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 15:47:51 crc kubenswrapper[5099]: E1212 15:47:51.541856 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:47:55 crc kubenswrapper[5099]: E1212 15:47:55.467629 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:47:58 crc kubenswrapper[5099]: I1212 15:47:58.467483 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:47:58 crc kubenswrapper[5099]: E1212 15:47:58.468239 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:48:02 crc kubenswrapper[5099]: E1212 15:48:02.472831 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact 
err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:48:08 crc kubenswrapper[5099]: E1212 15:48:08.467935 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:48:09 crc kubenswrapper[5099]: I1212 15:48:09.466646 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:48:09 crc kubenswrapper[5099]: E1212 15:48:09.466996 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:48:14 crc kubenswrapper[5099]: E1212 15:48:14.479700 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:48:22 crc kubenswrapper[5099]: I1212 15:48:22.474745 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:48:22 crc kubenswrapper[5099]: E1212 15:48:22.475865 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:48:22 crc kubenswrapper[5099]: E1212 15:48:22.510635 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:48:26 crc kubenswrapper[5099]: E1212 15:48:26.467973 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:48:33 crc kubenswrapper[5099]: E1212 15:48:33.467482 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:48:36 crc kubenswrapper[5099]: I1212 15:48:36.485131 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:48:36 crc kubenswrapper[5099]: E1212 15:48:36.486282 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:48:41 crc kubenswrapper[5099]: E1212 15:48:41.469106 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:48:44 crc kubenswrapper[5099]: E1212 15:48:44.479401 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:48:50 crc kubenswrapper[5099]: I1212 15:48:50.476230 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:48:50 crc kubenswrapper[5099]: E1212 15:48:50.477249 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.178447 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.194580 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.204934 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.257192 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d26g8\" (UniqueName: \"kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.257296 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.257342 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.359234 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.359902 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.360072 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.360462 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.360551 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d26g8\" (UniqueName: \"kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.383312 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d26g8\" (UniqueName: \"kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8\") pod \"redhat-operators-8sjzd\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.522293 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:48:52 crc kubenswrapper[5099]: I1212 15:48:52.778209 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:48:53 crc kubenswrapper[5099]: I1212 15:48:53.034423 5099 generic.go:358] "Generic (PLEG): container finished" podID="f98ee029-7292-4d97-b33b-26800db45e37" containerID="4d47d195085ec3f24f7cd0332d9f09fddf8b902860ec61439ab6518232fb0440" exitCode=0 Dec 12 15:48:53 crc kubenswrapper[5099]: I1212 15:48:53.034499 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerDied","Data":"4d47d195085ec3f24f7cd0332d9f09fddf8b902860ec61439ab6518232fb0440"} Dec 12 15:48:53 crc kubenswrapper[5099]: I1212 15:48:53.035137 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerStarted","Data":"4434b522b013b651425796ee67a686e4eb5488c7fb461304f12c91edad28ebe3"} Dec 12 15:48:54 crc kubenswrapper[5099]: I1212 15:48:54.044563 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerStarted","Data":"db17fa8f0b1bdd6f9bf9d33874ecbdeb81d774d4eec34345067c4cf2969547fd"} Dec 12 15:48:55 crc kubenswrapper[5099]: I1212 15:48:55.052944 5099 generic.go:358] "Generic (PLEG): container finished" podID="f98ee029-7292-4d97-b33b-26800db45e37" containerID="db17fa8f0b1bdd6f9bf9d33874ecbdeb81d774d4eec34345067c4cf2969547fd" exitCode=0 Dec 12 15:48:55 crc kubenswrapper[5099]: I1212 15:48:55.053285 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerDied","Data":"db17fa8f0b1bdd6f9bf9d33874ecbdeb81d774d4eec34345067c4cf2969547fd"} Dec 12 15:48:55 crc kubenswrapper[5099]: I1212 15:48:55.215330 5099 ???:1] "http: TLS handshake error from 192.168.126.11:42080: no serving certificate available for the kubelet" Dec 12 15:48:55 crc kubenswrapper[5099]: E1212 15:48:55.467330 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" 
podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:48:56 crc kubenswrapper[5099]: I1212 15:48:56.066922 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerStarted","Data":"098c0eca3610ad3280545a76b1cf91c9e8ce38a1fb08226a7f2bfcef224d39b2"} Dec 12 15:48:56 crc kubenswrapper[5099]: I1212 15:48:56.099068 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8sjzd" podStartSLOduration=3.401114291 podStartE2EDuration="4.099013749s" podCreationTimestamp="2025-12-12 15:48:52 +0000 UTC" firstStartedPulling="2025-12-12 15:48:53.035418215 +0000 UTC m=+1671.139326866" lastFinishedPulling="2025-12-12 15:48:53.733317673 +0000 UTC m=+1671.837226324" observedRunningTime="2025-12-12 15:48:56.084224963 +0000 UTC m=+1674.188133614" watchObservedRunningTime="2025-12-12 15:48:56.099013749 +0000 UTC m=+1674.202922430" Dec 12 15:48:57 crc kubenswrapper[5099]: E1212 15:48:57.468840 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:49:02 crc kubenswrapper[5099]: I1212 15:49:02.523336 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:02 crc kubenswrapper[5099]: I1212 15:49:02.523754 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:02 crc kubenswrapper[5099]: I1212 15:49:02.564873 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:03 crc kubenswrapper[5099]: I1212 15:49:03.181883 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:03 crc kubenswrapper[5099]: I1212 15:49:03.227616 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:49:04 crc kubenswrapper[5099]: I1212 15:49:04.467326 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:49:04 crc kubenswrapper[5099]: E1212 15:49:04.467711 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" 
podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:49:05 crc kubenswrapper[5099]: I1212 15:49:05.190639 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8sjzd" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="registry-server" containerID="cri-o://098c0eca3610ad3280545a76b1cf91c9e8ce38a1fb08226a7f2bfcef224d39b2" gracePeriod=2 Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.211586 5099 generic.go:358] "Generic (PLEG): container finished" podID="f98ee029-7292-4d97-b33b-26800db45e37" containerID="098c0eca3610ad3280545a76b1cf91c9e8ce38a1fb08226a7f2bfcef224d39b2" exitCode=0 Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.211632 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerDied","Data":"098c0eca3610ad3280545a76b1cf91c9e8ce38a1fb08226a7f2bfcef224d39b2"} Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.574740 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.587738 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d26g8\" (UniqueName: \"kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8\") pod \"f98ee029-7292-4d97-b33b-26800db45e37\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.587779 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities\") pod \"f98ee029-7292-4d97-b33b-26800db45e37\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.587840 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content\") pod \"f98ee029-7292-4d97-b33b-26800db45e37\" (UID: \"f98ee029-7292-4d97-b33b-26800db45e37\") " Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.589502 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities" (OuterVolumeSpecName: "utilities") pod "f98ee029-7292-4d97-b33b-26800db45e37" (UID: "f98ee029-7292-4d97-b33b-26800db45e37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.600803 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8" (OuterVolumeSpecName: "kube-api-access-d26g8") pod "f98ee029-7292-4d97-b33b-26800db45e37" (UID: "f98ee029-7292-4d97-b33b-26800db45e37"). InnerVolumeSpecName "kube-api-access-d26g8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.688729 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d26g8\" (UniqueName: \"kubernetes.io/projected/f98ee029-7292-4d97-b33b-26800db45e37-kube-api-access-d26g8\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.688764 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.722881 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f98ee029-7292-4d97-b33b-26800db45e37" (UID: "f98ee029-7292-4d97-b33b-26800db45e37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:49:08 crc kubenswrapper[5099]: I1212 15:49:08.789480 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f98ee029-7292-4d97-b33b-26800db45e37-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.220601 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sjzd" Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.220609 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sjzd" event={"ID":"f98ee029-7292-4d97-b33b-26800db45e37","Type":"ContainerDied","Data":"4434b522b013b651425796ee67a686e4eb5488c7fb461304f12c91edad28ebe3"} Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.220783 5099 scope.go:117] "RemoveContainer" containerID="098c0eca3610ad3280545a76b1cf91c9e8ce38a1fb08226a7f2bfcef224d39b2" Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.243508 5099 scope.go:117] "RemoveContainer" containerID="db17fa8f0b1bdd6f9bf9d33874ecbdeb81d774d4eec34345067c4cf2969547fd" Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.256846 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.260568 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8sjzd"] Dec 12 15:49:09 crc kubenswrapper[5099]: I1212 15:49:09.279031 5099 scope.go:117] "RemoveContainer" containerID="4d47d195085ec3f24f7cd0332d9f09fddf8b902860ec61439ab6518232fb0440" Dec 12 15:49:09 crc kubenswrapper[5099]: E1212 15:49:09.467615 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
Dec 12 15:49:10 crc kubenswrapper[5099]: I1212 15:49:10.476642 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98ee029-7292-4d97-b33b-26800db45e37" path="/var/lib/kubelet/pods/f98ee029-7292-4d97-b33b-26800db45e37/volumes"
Dec 12 15:49:12 crc kubenswrapper[5099]: E1212 15:49:12.476062 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:49:17 crc kubenswrapper[5099]: I1212 15:49:17.466934 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:49:17 crc kubenswrapper[5099]: E1212 15:49:17.467732 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:49:20 crc kubenswrapper[5099]: E1212 15:49:20.523164 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:49:20 crc kubenswrapper[5099]: E1212 15:49:20.523824 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 15:49:20 crc kubenswrapper[5099]: E1212 15:49:20.525068 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:49:24 crc kubenswrapper[5099]: E1212 15:49:24.468223 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
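Both recurring errors in this stretch are paced by the kubelet's capped exponential back-off: each failed image pull (ImagePullBackOff) or container restart (CrashLoopBackOff) doubles the retry delay until it pins at the cap, which is why the machine-config-daemon message has settled at "back-off 5m0s". The 10s base and 300s cap below are kubelet defaults and are stated here as an assumption:

```python
# Sketch of the doubling, capped back-off schedule behind the repeated
# "Back-off pulling image" and "back-off 5m0s restarting failed container"
# messages. Base/cap are assumed kubelet defaults (10s initial, 300s max).
def backoff_delays(base: float = 10.0, cap: float = 300.0, tries: int = 8):
    delay = base
    for _ in range(tries):
        yield min(delay, cap)
        delay *= 2

print(list(backoff_delays()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```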
Dec 12 15:49:32 crc kubenswrapper[5099]: I1212 15:49:32.477845 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:49:32 crc kubenswrapper[5099]: E1212 15:49:32.479252 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:49:34 crc kubenswrapper[5099]: E1212 15:49:34.476715 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:49:35 crc kubenswrapper[5099]: E1212 15:49:35.467594 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:49:45 crc kubenswrapper[5099]: I1212 15:49:45.466632 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:49:45 crc kubenswrapper[5099]: E1212 15:49:45.467597 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:49:46 crc kubenswrapper[5099]: E1212 15:49:46.467625 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:49:47 crc kubenswrapper[5099]: E1212 15:49:47.467523 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:49:57 crc kubenswrapper[5099]: I1212 15:49:57.467117 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:49:57 crc kubenswrapper[5099]: E1212 15:49:57.469707 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:49:59 crc kubenswrapper[5099]: E1212 15:49:59.467314 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:50:00 crc kubenswrapper[5099]: E1212 15:50:00.466731 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:50:09 crc kubenswrapper[5099]: I1212 15:50:09.466910 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:50:09 crc kubenswrapper[5099]: E1212 15:50:09.467791 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:50:11 crc kubenswrapper[5099]: E1212 15:50:11.469904 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:50:13 crc kubenswrapper[5099]: E1212 15:50:13.467543 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:50:22 crc kubenswrapper[5099]: E1212 15:50:22.474196 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:50:23 crc kubenswrapper[5099]: I1212 15:50:23.467482 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:50:23 crc kubenswrapper[5099]: E1212 15:50:23.468070 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:50:28 crc kubenswrapper[5099]: E1212 15:50:28.468927 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:50:33 crc kubenswrapper[5099]: E1212 15:50:33.467392 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:50:34 crc kubenswrapper[5099]: I1212 15:50:34.467017 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:50:34 crc kubenswrapper[5099]: E1212 15:50:34.467282 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:50:40 crc kubenswrapper[5099]: E1212 15:50:40.474061 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:50:45 crc kubenswrapper[5099]: I1212 15:50:45.467079 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:50:45 crc kubenswrapper[5099]: E1212 15:50:45.467948 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:50:48 crc kubenswrapper[5099]: E1212 15:50:48.467641 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:50:54 crc kubenswrapper[5099]: E1212 15:50:54.467970 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:50:57 crc kubenswrapper[5099]: I1212 15:50:57.466645 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:50:57 crc kubenswrapper[5099]: E1212 15:50:57.468246 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:50:59 crc kubenswrapper[5099]: E1212 15:50:59.467495 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.122384 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.134259 5099 log.go:25] "Finished parsing log file" 
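The "Finished parsing log file" entries just below are the kubelet reading container log files under /var/log/pods/... (for example while serving a logs request). Assuming the standard CRI logging format, each line of those files is `<RFC3339Nano timestamp> <stdout|stderr> <P|F> <message>`, where P marks a partial line and F a full one; a minimal parser under that assumption:

```python
# Parse one line of a CRI container log file, the format kubelet is
# reading when it logs "Finished parsing log file" below.
def parse_cri_line(line: str):
    # "<timestamp> <stdout|stderr> <P|F> <message>"
    ts, stream, tag, msg = line.rstrip("\n").split(" ", 3)
    return ts, stream, tag == "F", msg

print(parse_cri_line("2025-12-12T15:51:03.122384000Z stderr F sample container output"))
```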
path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.137032 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.144896 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.147098 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:51:03 crc kubenswrapper[5099]: I1212 15:51:03.155868 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:51:09 crc kubenswrapper[5099]: E1212 15:51:09.467097 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:51:10 crc kubenswrapper[5099]: I1212 15:51:10.467955 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:51:10 crc kubenswrapper[5099]: E1212 15:51:10.468701 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:51:14 crc kubenswrapper[5099]: E1212 15:51:14.467941 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:51:23 crc kubenswrapper[5099]: E1212 15:51:23.467396 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:51:24 crc kubenswrapper[5099]: I1212 15:51:24.467421 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:51:24 crc kubenswrapper[5099]: E1212 15:51:24.467949 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:51:27 crc kubenswrapper[5099]: E1212 15:51:27.467618 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:51:36 crc kubenswrapper[5099]: I1212 15:51:36.467090 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:51:36 crc kubenswrapper[5099]: E1212 15:51:36.467911 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 
Dec 12 15:51:38 crc kubenswrapper[5099]: E1212 15:51:38.467306 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:51:40 crc kubenswrapper[5099]: E1212 15:51:40.472064 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:51:50 crc kubenswrapper[5099]: I1212 15:51:50.467330 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:51:50 crc kubenswrapper[5099]: E1212 15:51:50.468142 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:51:53 crc kubenswrapper[5099]: E1212 15:51:53.467254 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:51:55 crc kubenswrapper[5099]: E1212 15:51:55.467408 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:52:01 crc kubenswrapper[5099]: I1212 15:52:01.468370 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:52:01 crc kubenswrapper[5099]: E1212 15:52:01.469305 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:52:04 crc kubenswrapper[5099]: E1212 15:52:04.468223 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:52:10 crc kubenswrapper[5099]: E1212 15:52:10.540813 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:52:10 crc kubenswrapper[5099]: E1212 15:52:10.541761 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4plbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-w8hnr_service-telemetry(911b7734-5d38-4d7f-b31b-a951d59f3fc1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 15:52:10 crc kubenswrapper[5099]: E1212 15:52:10.543015 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:52:15 crc kubenswrapper[5099]: I1212 15:52:15.467273 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93"
Dec 12 15:52:15 crc kubenswrapper[5099]: E1212 15:52:15.468343 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc"
Dec 12 15:52:16 crc kubenswrapper[5099]: E1212 15:52:16.524712 5099 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 15:52:16 crc kubenswrapper[5099]: E1212 15:52:16.524924 5099 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-42fpw_service-telemetry(362b85ef-f3b4-4656-bd6f-567457c085aa): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 15:52:16 crc kubenswrapper[5099]: E1212 15:52:16.526055 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa"
Dec 12 15:52:21 crc kubenswrapper[5099]: E1212 15:52:21.470268 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1"
Dec 12 15:52:26 crc
kubenswrapper[5099]: I1212 15:52:26.468081 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:52:26 crc kubenswrapper[5099]: E1212 15:52:26.468811 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:52:28 crc kubenswrapper[5099]: E1212 15:52:28.468516 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:52:34 crc kubenswrapper[5099]: I1212 15:52:34.467952 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:52:34 crc kubenswrapper[5099]: E1212 15:52:34.468966 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.536389 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjxkq/must-gather-ksqqc"] Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537834 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="extract-content" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537863 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="extract-content" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537890 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="registry-server" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537899 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="registry-server" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537936 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="extract-utilities" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.537944 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="extract-utilities" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.538067 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f98ee029-7292-4d97-b33b-26800db45e37" containerName="registry-server" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.554090 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjxkq/must-gather-ksqqc"] Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.554249 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.559148 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wjxkq\"/\"kube-root-ca.crt\"" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.559753 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wjxkq\"/\"default-dockercfg-n9dm7\"" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.576374 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wjxkq\"/\"openshift-service-ca.crt\"" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.624613 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.625108 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llt4m\" (UniqueName: \"kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.726892 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.727245 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llt4m\" (UniqueName: \"kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.727527 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.757117 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llt4m\" (UniqueName: \"kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m\") pod \"must-gather-ksqqc\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:35 crc kubenswrapper[5099]: I1212 15:52:35.879796 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:52:36 crc kubenswrapper[5099]: I1212 15:52:36.339748 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjxkq/must-gather-ksqqc"] Dec 12 15:52:37 crc kubenswrapper[5099]: I1212 15:52:37.153647 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" event={"ID":"4f9442f2-ea1b-433e-a652-887e70efb629","Type":"ContainerStarted","Data":"674fe1e75d68bc5303e47b0d0d9d653f0fed40ec0f2263479220c5cd3b3f1eec"} Dec 12 15:52:37 crc kubenswrapper[5099]: I1212 15:52:37.467362 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:52:37 crc kubenswrapper[5099]: E1212 15:52:37.467823 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwqjz_openshift-machine-config-operator(eeb52909-7783-4c4f-a55a-9f4333d025bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" Dec 12 15:52:43 crc kubenswrapper[5099]: I1212 15:52:43.198855 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" event={"ID":"4f9442f2-ea1b-433e-a652-887e70efb629","Type":"ContainerStarted","Data":"1013817c69ef11fa525d9ca9663ee4c56a4816e33aaa5b9d38b132372a5001ad"} Dec 12 15:52:43 crc kubenswrapper[5099]: E1212 15:52:43.467483 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:52:44 crc kubenswrapper[5099]: I1212 15:52:44.209289 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" 
event={"ID":"4f9442f2-ea1b-433e-a652-887e70efb629","Type":"ContainerStarted","Data":"7be67489c25f6a771526790a19418b5e51f264bca70a3e523965e62337b23dc7"} Dec 12 15:52:44 crc kubenswrapper[5099]: I1212 15:52:44.232040 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" podStartSLOduration=2.907575913 podStartE2EDuration="9.232011134s" podCreationTimestamp="2025-12-12 15:52:35 +0000 UTC" firstStartedPulling="2025-12-12 15:52:36.356354053 +0000 UTC m=+1894.460262724" lastFinishedPulling="2025-12-12 15:52:42.680789304 +0000 UTC m=+1900.784697945" observedRunningTime="2025-12-12 15:52:44.224652998 +0000 UTC m=+1902.328561679" watchObservedRunningTime="2025-12-12 15:52:44.232011134 +0000 UTC m=+1902.335919795" Dec 12 15:52:45 crc kubenswrapper[5099]: I1212 15:52:45.524065 5099 ???:1] "http: TLS handshake error from 192.168.126.11:35676: no serving certificate available for the kubelet" Dec 12 15:52:46 crc kubenswrapper[5099]: E1212 15:52:46.467976 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:52:48 crc kubenswrapper[5099]: I1212 15:52:48.467098 5099 scope.go:117] "RemoveContainer" containerID="ad67bcec8a6eecfac5ce268d3584582863fbb011d67aa9220263468f2fd09c93" Dec 12 15:52:49 crc kubenswrapper[5099]: I1212 15:52:49.245503 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" event={"ID":"eeb52909-7783-4c4f-a55a-9f4333d025bc","Type":"ContainerStarted","Data":"0da9623bec3bc98e5696baf3952c1de2f0464497aa795d22b8893d807f57b942"} Dec 12 15:52:55 crc kubenswrapper[5099]: E1212 15:52:55.467949 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:52:59 crc kubenswrapper[5099]: E1212 15:52:59.177294 5099 certificate_manager.go:613] 
"Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 15:52:59 crc kubenswrapper[5099]: E1212 15:52:59.467583 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.222956 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.234199 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.259841 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53390: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.290490 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53396: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.322757 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53400: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.378571 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53404: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.440001 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53414: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.542902 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53430: no serving certificate available for the kubelet" Dec 12 15:53:03 crc kubenswrapper[5099]: I1212 15:53:03.725230 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53436: no serving certificate available for the kubelet" Dec 12 15:53:04 crc kubenswrapper[5099]: I1212 15:53:04.076203 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53438: no serving certificate available for the kubelet" Dec 12 15:53:04 crc kubenswrapper[5099]: I1212 15:53:04.742299 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53452: no serving certificate available for the kubelet" Dec 12 15:53:06 crc kubenswrapper[5099]: I1212 15:53:06.050933 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53460: no serving certificate available for the kubelet" Dec 12 15:53:07 crc kubenswrapper[5099]: E1212 15:53:07.467893 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:53:08 crc kubenswrapper[5099]: I1212 15:53:08.636355 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53470: no serving certificate available for the kubelet" Dec 12 15:53:11 crc kubenswrapper[5099]: E1212 15:53:11.467465 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:53:13 crc kubenswrapper[5099]: I1212 15:53:13.787836 5099 ???:1] "http: TLS handshake error from 192.168.126.11:33322: no serving certificate available for the kubelet" Dec 12 15:53:20 crc kubenswrapper[5099]: E1212 15:53:20.469950 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:53:21 crc kubenswrapper[5099]: I1212 15:53:21.642063 5099 ???:1] "http: TLS handshake error from 192.168.126.11:42364: no serving certificate available for the kubelet" Dec 12 15:53:21 crc kubenswrapper[5099]: I1212 15:53:21.819442 5099 ???:1] "http: TLS handshake error from 192.168.126.11:42376: no serving certificate available for the kubelet" Dec 12 15:53:21 crc kubenswrapper[5099]: I1212 15:53:21.853217 5099 ???:1] "http: TLS handshake 
error from 192.168.126.11:42392: no serving certificate available for the kubelet" Dec 12 15:53:23 crc kubenswrapper[5099]: E1212 15:53:23.467793 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:53:24 crc kubenswrapper[5099]: I1212 15:53:24.055697 5099 ???:1] "http: TLS handshake error from 192.168.126.11:42406: no serving certificate available for the kubelet" Dec 12 15:53:32 crc kubenswrapper[5099]: E1212 15:53:32.472909 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:53:32 crc kubenswrapper[5099]: I1212 15:53:32.974029 5099 ???:1] "http: TLS handshake error from 192.168.126.11:33262: no serving certificate available for the kubelet" Dec 12 15:53:33 crc kubenswrapper[5099]: I1212 15:53:33.095689 5099 ???:1] "http: TLS handshake error from 192.168.126.11:33272: no serving certificate available for the kubelet" Dec 12 15:53:33 crc kubenswrapper[5099]: I1212 15:53:33.176213 5099 ???:1] "http: TLS handshake error from 192.168.126.11:33276: no serving certificate available for the kubelet" Dec 12 15:53:34 crc kubenswrapper[5099]: E1212 15:53:34.467132 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:53:44 crc kubenswrapper[5099]: I1212 15:53:44.561888 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57908: no serving certificate available for the kubelet" Dec 12 15:53:47 crc kubenswrapper[5099]: E1212 15:53:47.468771 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:53:47 crc kubenswrapper[5099]: I1212 15:53:47.708093 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57916: no serving certificate available for the kubelet" Dec 12 15:53:47 crc kubenswrapper[5099]: I1212 15:53:47.916505 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57930: no serving certificate available for the kubelet" Dec 12 15:53:47 crc kubenswrapper[5099]: I1212 15:53:47.975528 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57934: no serving certificate available for the kubelet" Dec 12 15:53:47 crc kubenswrapper[5099]: I1212 15:53:47.981614 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57950: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.203691 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57952: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.214737 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57954: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.223088 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57962: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.335888 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57976: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.522960 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57980: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.544083 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57990: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.544999 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58006: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.707937 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58016: no serving certificate available for the kubelet" Dec 12 15:53:48 crc 
kubenswrapper[5099]: I1212 15:53:48.709281 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58022: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.723869 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58034: no serving certificate available for the kubelet" Dec 12 15:53:48 crc kubenswrapper[5099]: I1212 15:53:48.938738 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58044: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.053203 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58056: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.053799 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58068: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.086268 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58080: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.260080 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58086: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.260706 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58100: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.409518 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58112: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: E1212 15:53:49.467624 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.574935 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58124: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.745657 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58134: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.746967 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58136: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.748407 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58142: no serving certificate available for the kubelet" Dec 12 15:53:49 crc kubenswrapper[5099]: I1212 15:53:49.995816 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58158: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.019224 5099 ???:1] 
"http: TLS handshake error from 192.168.126.11:58174: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.038227 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58190: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.159527 5099 ???:1] "http: TLS handshake error from 192.168.126.11:58198: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.345690 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55056: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.390740 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55060: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.435392 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55072: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.746574 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55074: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.752321 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55080: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.760981 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55082: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.935597 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55096: no serving certificate available for the kubelet" Dec 12 15:53:50 crc kubenswrapper[5099]: I1212 15:53:50.969624 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55104: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.110447 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55116: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.123265 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55128: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.144563 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55144: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.320519 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55148: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.327963 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55164: no serving certificate available for the kubelet" Dec 12 15:53:51 crc kubenswrapper[5099]: I1212 15:53:51.331231 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55180: no serving certificate available for the kubelet" Dec 12 15:54:02 crc kubenswrapper[5099]: E1212 15:54:02.474846 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:54:02 crc kubenswrapper[5099]: E1212 15:54:02.474955 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:54:02 crc kubenswrapper[5099]: I1212 15:54:02.896451 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34478: no serving certificate available for the kubelet" Dec 12 15:54:03 crc kubenswrapper[5099]: I1212 15:54:03.018446 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34488: no serving certificate available for the kubelet" Dec 12 15:54:03 crc kubenswrapper[5099]: I1212 15:54:03.035708 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34498: no serving certificate available for the kubelet" Dec 12 15:54:03 crc kubenswrapper[5099]: I1212 15:54:03.203689 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34500: no serving certificate available for the kubelet" Dec 12 15:54:03 crc kubenswrapper[5099]: I1212 15:54:03.209541 5099 ???:1] "http: TLS handshake error from 192.168.126.11:34506: no serving certificate available for the kubelet" Dec 12 15:54:14 crc kubenswrapper[5099]: E1212 15:54:14.468867 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.665433 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.691319 5099 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.691484 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.807944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.808031 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.808077 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmplg\" (UniqueName: \"kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.909841 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.909915 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.909957 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmplg\" (UniqueName: \"kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.910489 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.910526 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:14 crc kubenswrapper[5099]: I1212 15:54:14.941566 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmplg\" (UniqueName: \"kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg\") pod \"certified-operators-vdcds\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:15 crc kubenswrapper[5099]: I1212 15:54:15.019439 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:15 crc kubenswrapper[5099]: I1212 15:54:15.545137 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:15 crc kubenswrapper[5099]: W1212 15:54:15.556023 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ea1b68_9af3_4ef8_9147_20361514d2f6.slice/crio-ee9858aae509b663ca2a9762474ee820a940310d56627ce9f742c113c04a8ea3 WatchSource:0}: Error finding container ee9858aae509b663ca2a9762474ee820a940310d56627ce9f742c113c04a8ea3: Status 404 returned error can't find the container with id ee9858aae509b663ca2a9762474ee820a940310d56627ce9f742c113c04a8ea3 Dec 12 15:54:16 crc kubenswrapper[5099]: I1212 15:54:16.109223 5099 generic.go:358] "Generic (PLEG): container finished" podID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerID="c0a62f1ff24f3f02131a352bc7fcca80904696214ae6172fb85782e5f6ebdcb0" exitCode=0 Dec 12 15:54:16 crc kubenswrapper[5099]: I1212 15:54:16.109549 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerDied","Data":"c0a62f1ff24f3f02131a352bc7fcca80904696214ae6172fb85782e5f6ebdcb0"} Dec 12 15:54:16 crc kubenswrapper[5099]: I1212 15:54:16.109593 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerStarted","Data":"ee9858aae509b663ca2a9762474ee820a940310d56627ce9f742c113c04a8ea3"} Dec 12 15:54:16 crc kubenswrapper[5099]: E1212 15:54:16.480604 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:54:18 crc kubenswrapper[5099]: I1212 15:54:18.128028 5099 generic.go:358] "Generic (PLEG): container finished" podID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerID="65eaf90678c0b86a6b2cde00d8f2123d8ecc8cb97b43365140085dfae9c3c3c2" exitCode=0 Dec 12 15:54:18 crc kubenswrapper[5099]: I1212 15:54:18.128426 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerDied","Data":"65eaf90678c0b86a6b2cde00d8f2123d8ecc8cb97b43365140085dfae9c3c3c2"} Dec 12 15:54:19 crc kubenswrapper[5099]: I1212 15:54:19.138346 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerStarted","Data":"ff94e2015d51831d9c310946006dd5b72d343a1e6805d70fe3e6dc8eb0210fc5"} Dec 12 15:54:19 crc kubenswrapper[5099]: I1212 15:54:19.162178 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vdcds" podStartSLOduration=4.226916976 podStartE2EDuration="5.162141655s" podCreationTimestamp="2025-12-12 15:54:14 +0000 UTC" firstStartedPulling="2025-12-12 15:54:16.11011712 +0000 UTC m=+1994.214025751" lastFinishedPulling="2025-12-12 15:54:17.045341799 +0000 UTC m=+1995.149250430" observedRunningTime="2025-12-12 15:54:19.16035014 +0000 UTC m=+1997.264258791" watchObservedRunningTime="2025-12-12 15:54:19.162141655 +0000 UTC m=+1997.266050296" Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.020776 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.021230 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.092264 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.239397 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.329922 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:25 crc kubenswrapper[5099]: I1212 15:54:25.553450 5099 ???:1] "http: TLS handshake error from 192.168.126.11:60322: no serving certificate available for the kubelet" Dec 12 15:54:26 crc kubenswrapper[5099]: E1212 15:54:26.468138 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:54:27 crc kubenswrapper[5099]: I1212 15:54:27.200549 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vdcds" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="registry-server" 
containerID="cri-o://ff94e2015d51831d9c310946006dd5b72d343a1e6805d70fe3e6dc8eb0210fc5" gracePeriod=2 Dec 12 15:54:27 crc kubenswrapper[5099]: E1212 15:54:27.468058 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.210018 5099 generic.go:358] "Generic (PLEG): container finished" podID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerID="ff94e2015d51831d9c310946006dd5b72d343a1e6805d70fe3e6dc8eb0210fc5" exitCode=0 Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.210312 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerDied","Data":"ff94e2015d51831d9c310946006dd5b72d343a1e6805d70fe3e6dc8eb0210fc5"} Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.742198 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.836941 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmplg\" (UniqueName: \"kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg\") pod \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.837053 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities\") pod \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.837125 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content\") pod \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\" (UID: \"f6ea1b68-9af3-4ef8-9147-20361514d2f6\") " Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.838592 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities" (OuterVolumeSpecName: "utilities") pod "f6ea1b68-9af3-4ef8-9147-20361514d2f6" (UID: "f6ea1b68-9af3-4ef8-9147-20361514d2f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.845346 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg" (OuterVolumeSpecName: "kube-api-access-rmplg") pod "f6ea1b68-9af3-4ef8-9147-20361514d2f6" (UID: "f6ea1b68-9af3-4ef8-9147-20361514d2f6"). InnerVolumeSpecName "kube-api-access-rmplg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.875275 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ea1b68-9af3-4ef8-9147-20361514d2f6" (UID: "f6ea1b68-9af3-4ef8-9147-20361514d2f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.939175 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmplg\" (UniqueName: \"kubernetes.io/projected/f6ea1b68-9af3-4ef8-9147-20361514d2f6-kube-api-access-rmplg\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.939222 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:28 crc kubenswrapper[5099]: I1212 15:54:28.939231 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ea1b68-9af3-4ef8-9147-20361514d2f6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.240550 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdcds" event={"ID":"f6ea1b68-9af3-4ef8-9147-20361514d2f6","Type":"ContainerDied","Data":"ee9858aae509b663ca2a9762474ee820a940310d56627ce9f742c113c04a8ea3"} Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.240721 5099 scope.go:117] "RemoveContainer" containerID="ff94e2015d51831d9c310946006dd5b72d343a1e6805d70fe3e6dc8eb0210fc5" Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.241046 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vdcds" Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.270308 5099 scope.go:117] "RemoveContainer" containerID="65eaf90678c0b86a6b2cde00d8f2123d8ecc8cb97b43365140085dfae9c3c3c2" Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.304051 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.505130 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vdcds"] Dec 12 15:54:29 crc kubenswrapper[5099]: I1212 15:54:29.509013 5099 scope.go:117] "RemoveContainer" containerID="c0a62f1ff24f3f02131a352bc7fcca80904696214ae6172fb85782e5f6ebdcb0" Dec 12 15:54:30 crc kubenswrapper[5099]: I1212 15:54:30.476492 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" path="/var/lib/kubelet/pods/f6ea1b68-9af3-4ef8-9147-20361514d2f6/volumes" Dec 12 15:54:38 crc kubenswrapper[5099]: E1212 15:54:38.468532 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:54:40 crc kubenswrapper[5099]: I1212 15:54:40.423736 5099 generic.go:358] "Generic (PLEG): container finished" podID="4f9442f2-ea1b-433e-a652-887e70efb629" containerID="1013817c69ef11fa525d9ca9663ee4c56a4816e33aaa5b9d38b132372a5001ad" exitCode=0 Dec 12 15:54:40 crc kubenswrapper[5099]: I1212 15:54:40.423889 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" event={"ID":"4f9442f2-ea1b-433e-a652-887e70efb629","Type":"ContainerDied","Data":"1013817c69ef11fa525d9ca9663ee4c56a4816e33aaa5b9d38b132372a5001ad"} Dec 12 15:54:40 crc kubenswrapper[5099]: I1212 15:54:40.424654 5099 scope.go:117] "RemoveContainer" containerID="1013817c69ef11fa525d9ca9663ee4c56a4816e33aaa5b9d38b132372a5001ad" Dec 12 15:54:40 crc kubenswrapper[5099]: E1212 15:54:40.468454 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.541543 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43494: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.683435 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43506: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.694114 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43510: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.715192 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43526: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.724603 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43530: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.741086 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43544: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.753616 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43560: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.769034 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43564: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.780927 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43580: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.933034 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43596: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.944974 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43598: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.969701 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43600: no serving certificate available for the kubelet" Dec 12 15:54:45 crc kubenswrapper[5099]: I1212 15:54:45.981070 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43606: no serving certificate available for the kubelet" Dec 12 15:54:46 crc kubenswrapper[5099]: I1212 15:54:46.001368 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43614: no serving certificate available for the kubelet" Dec 12 15:54:46 crc kubenswrapper[5099]: I1212 15:54:46.024590 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43622: no serving certificate available for the kubelet" Dec 12 15:54:46 crc kubenswrapper[5099]: I1212 15:54:46.046133 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43628: no serving certificate available for the kubelet" Dec 12 15:54:46 crc kubenswrapper[5099]: I1212 15:54:46.059861 5099 ???:1] "http: TLS handshake error from 192.168.126.11:43632: no serving certificate available for the kubelet" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.112934 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjxkq/must-gather-ksqqc"] Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.114550 5099 kuberuntime_container.go:858] "Killing container with 
a grace period" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="copy" containerID="cri-o://7be67489c25f6a771526790a19418b5e51f264bca70a3e523965e62337b23dc7" gracePeriod=2 Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.116907 5099 status_manager.go:895] "Failed to get status for pod" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" err="pods \"must-gather-ksqqc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wjxkq\": no relationship found between node 'crc' and this object" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.126194 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjxkq/must-gather-ksqqc"] Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.505258 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjxkq_must-gather-ksqqc_4f9442f2-ea1b-433e-a652-887e70efb629/copy/0.log" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.505691 5099 generic.go:358] "Generic (PLEG): container finished" podID="4f9442f2-ea1b-433e-a652-887e70efb629" containerID="7be67489c25f6a771526790a19418b5e51f264bca70a3e523965e62337b23dc7" exitCode=143 Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.586529 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjxkq_must-gather-ksqqc_4f9442f2-ea1b-433e-a652-887e70efb629/copy/0.log" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.587069 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.588594 5099 status_manager.go:895] "Failed to get status for pod" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" err="pods \"must-gather-ksqqc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wjxkq\": no relationship found between node 'crc' and this object" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.641417 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output\") pod \"4f9442f2-ea1b-433e-a652-887e70efb629\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.641625 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llt4m\" (UniqueName: \"kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m\") pod \"4f9442f2-ea1b-433e-a652-887e70efb629\" (UID: \"4f9442f2-ea1b-433e-a652-887e70efb629\") " Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.647988 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m" (OuterVolumeSpecName: "kube-api-access-llt4m") pod "4f9442f2-ea1b-433e-a652-887e70efb629" (UID: "4f9442f2-ea1b-433e-a652-887e70efb629"). InnerVolumeSpecName "kube-api-access-llt4m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.689174 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4f9442f2-ea1b-433e-a652-887e70efb629" (UID: "4f9442f2-ea1b-433e-a652-887e70efb629"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.742984 5099 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4f9442f2-ea1b-433e-a652-887e70efb629-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:51 crc kubenswrapper[5099]: I1212 15:54:51.743029 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-llt4m\" (UniqueName: \"kubernetes.io/projected/4f9442f2-ea1b-433e-a652-887e70efb629-kube-api-access-llt4m\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:52 crc kubenswrapper[5099]: E1212 15:54:52.472238 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.472733 5099 status_manager.go:895] "Failed to get status for pod" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" err="pods \"must-gather-ksqqc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wjxkq\": no relationship found between node 'crc' and this object" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.474689 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" path="/var/lib/kubelet/pods/4f9442f2-ea1b-433e-a652-887e70efb629/volumes" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.514415 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjxkq_must-gather-ksqqc_4f9442f2-ea1b-433e-a652-887e70efb629/copy/0.log" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.514882 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjxkq/must-gather-ksqqc" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.514988 5099 scope.go:117] "RemoveContainer" containerID="7be67489c25f6a771526790a19418b5e51f264bca70a3e523965e62337b23dc7" Dec 12 15:54:52 crc kubenswrapper[5099]: I1212 15:54:52.532919 5099 scope.go:117] "RemoveContainer" containerID="1013817c69ef11fa525d9ca9663ee4c56a4816e33aaa5b9d38b132372a5001ad" Dec 12 15:54:53 crc kubenswrapper[5099]: E1212 15:54:53.468389 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:05 crc kubenswrapper[5099]: E1212 15:55:05.468903 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:55:06 crc kubenswrapper[5099]: E1212 15:55:06.468259 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:16 crc kubenswrapper[5099]: E1212 15:55:16.480861 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:55:16 crc kubenswrapper[5099]: I1212 15:55:16.515983 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:55:16 crc kubenswrapper[5099]: I1212 15:55:16.516383 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:55:17 crc kubenswrapper[5099]: E1212 15:55:17.467363 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:29 crc kubenswrapper[5099]: E1212 15:55:29.468181 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" 
podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:55:30 crc kubenswrapper[5099]: E1212 15:55:30.467417 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.411118 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412495 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="gather" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412522 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="gather" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412538 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="extract-utilities" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412545 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="extract-utilities" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412555 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="registry-server" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412562 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="registry-server" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412581 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="extract-content" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412587 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="extract-content" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412597 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="copy" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412602 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="copy" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412785 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="gather" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412796 5099 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="4f9442f2-ea1b-433e-a652-887e70efb629" containerName="copy" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.412810 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6ea1b68-9af3-4ef8-9147-20361514d2f6" containerName="registry-server" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.429087 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.429298 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.577768 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.577957 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.577990 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6264\" (UniqueName: \"kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.679526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.679620 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.679654 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6264\" (UniqueName: \"kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.680589 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.680756 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.700835 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6264\" (UniqueName: \"kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264\") pod \"community-operators-tswnb\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:38 crc kubenswrapper[5099]: I1212 15:55:38.750428 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:39 crc kubenswrapper[5099]: I1212 15:55:39.071848 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:39 crc kubenswrapper[5099]: I1212 15:55:39.994628 5099 generic.go:358] "Generic (PLEG): container finished" podID="56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" containerID="ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836" exitCode=0 Dec 12 15:55:39 crc kubenswrapper[5099]: I1212 15:55:39.994685 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerDied","Data":"ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836"} Dec 12 15:55:39 crc kubenswrapper[5099]: I1212 15:55:39.995026 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerStarted","Data":"258ab6afc6e5d872ebb158f64f9e2aa5a1fc1856e4d301446a89fc6b43a7fa76"} Dec 12 15:55:41 crc kubenswrapper[5099]: I1212 15:55:41.091936 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerStarted","Data":"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425"} Dec 12 15:55:42 crc kubenswrapper[5099]: I1212 15:55:42.099706 5099 generic.go:358] "Generic (PLEG): container finished" podID="56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" containerID="034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425" exitCode=0 Dec 12 15:55:42 crc kubenswrapper[5099]: I1212 15:55:42.099801 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerDied","Data":"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425"} Dec 12 15:55:42 crc kubenswrapper[5099]: E1212 15:55:42.485900 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:43 crc kubenswrapper[5099]: I1212 15:55:43.108882 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerStarted","Data":"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a"} Dec 12 15:55:44 crc kubenswrapper[5099]: E1212 15:55:44.468125 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:55:46 crc kubenswrapper[5099]: I1212 15:55:46.516258 5099 patch_prober.go:28] interesting pod/machine-config-daemon-qwqjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:55:46 crc kubenswrapper[5099]: I1212 15:55:46.516591 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwqjz" podUID="eeb52909-7783-4c4f-a55a-9f4333d025bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:55:47 crc kubenswrapper[5099]: I1212 15:55:47.504802 5099 ???:1] "http: TLS handshake error from 192.168.126.11:38026: no serving certificate available for the kubelet" Dec 12 15:55:48 crc kubenswrapper[5099]: I1212 15:55:48.750803 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:48 crc kubenswrapper[5099]: I1212 15:55:48.752251 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:48 crc kubenswrapper[5099]: I1212 15:55:48.806832 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:48 crc kubenswrapper[5099]: I1212 15:55:48.828433 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tswnb" podStartSLOduration=10.156495669 podStartE2EDuration="10.828416212s" podCreationTimestamp="2025-12-12 15:55:38 +0000 UTC" firstStartedPulling="2025-12-12 15:55:39.995870024 +0000 UTC m=+2078.099778665" lastFinishedPulling="2025-12-12 
15:55:40.667790567 +0000 UTC m=+2078.771699208" observedRunningTime="2025-12-12 15:55:43.125013968 +0000 UTC m=+2081.228922609" watchObservedRunningTime="2025-12-12 15:55:48.828416212 +0000 UTC m=+2086.932324853" Dec 12 15:55:49 crc kubenswrapper[5099]: I1212 15:55:49.192393 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:49 crc kubenswrapper[5099]: I1212 15:55:49.231221 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:51 crc kubenswrapper[5099]: I1212 15:55:51.169814 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tswnb" podUID="56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" containerName="registry-server" containerID="cri-o://b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a" gracePeriod=2 Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.042396 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.154052 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content\") pod \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.154243 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6264\" (UniqueName: \"kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264\") pod \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.154326 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities\") pod \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\" (UID: \"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d\") " Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.155419 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities" (OuterVolumeSpecName: "utilities") pod "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" (UID: "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.161655 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264" (OuterVolumeSpecName: "kube-api-access-v6264") pod "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" (UID: "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d"). InnerVolumeSpecName "kube-api-access-v6264". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.178727 5099 generic.go:358] "Generic (PLEG): container finished" podID="56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" containerID="b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a" exitCode=0 Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.178808 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerDied","Data":"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a"} Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.178908 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tswnb" event={"ID":"56b8b12c-d2f7-4d6e-9e07-d9952d5c897d","Type":"ContainerDied","Data":"258ab6afc6e5d872ebb158f64f9e2aa5a1fc1856e4d301446a89fc6b43a7fa76"} Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.178853 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tswnb" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.178968 5099 scope.go:117] "RemoveContainer" containerID="b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.200155 5099 scope.go:117] "RemoveContainer" containerID="034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.214069 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" (UID: "56b8b12c-d2f7-4d6e-9e07-d9952d5c897d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.219029 5099 scope.go:117] "RemoveContainer" containerID="ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.237158 5099 scope.go:117] "RemoveContainer" containerID="b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a" Dec 12 15:55:52 crc kubenswrapper[5099]: E1212 15:55:52.237596 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a\": container with ID starting with b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a not found: ID does not exist" containerID="b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.237647 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a"} err="failed to get container status \"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a\": rpc error: code = NotFound desc = could not find container \"b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a\": container with ID starting with b3a7574e5187259e51c295080bd423ca316129523efff57975dee61b5d53b39a not found: ID does not exist" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.237692 5099 scope.go:117] "RemoveContainer" containerID="034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425" Dec 12 15:55:52 crc kubenswrapper[5099]: E1212 15:55:52.237971 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425\": container with ID starting with 034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425 not found: ID does not exist" containerID="034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.238006 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425"} err="failed to get container status \"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425\": rpc error: code = NotFound desc = could not find container \"034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425\": container with ID starting with 034c6fe473e0c14f912c612826d05aa3b0f45f98ce1c32e41772f13ccd140425 not found: ID does not exist" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.238024 5099 scope.go:117] "RemoveContainer" containerID="ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836" Dec 12 15:55:52 crc kubenswrapper[5099]: E1212 15:55:52.238359 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836\": container with ID starting with ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836 not found: ID does not exist" containerID="ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.238388 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836"} err="failed to get container status \"ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836\": rpc error: code = NotFound desc = could not find container \"ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836\": container with ID starting with ff01e07d256c881c7c8e47a7bba02248b3820085e272d90e5f540c06a2966836 not found: ID does not exist" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.255597 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.255635 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6264\" (UniqueName: \"kubernetes.io/projected/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-kube-api-access-v6264\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.255649 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.515857 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:52 crc kubenswrapper[5099]: I1212 15:55:52.519720 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tswnb"] Dec 12 15:55:54 crc kubenswrapper[5099]: I1212 15:55:54.474995 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b8b12c-d2f7-4d6e-9e07-d9952d5c897d" path="/var/lib/kubelet/pods/56b8b12c-d2f7-4d6e-9e07-d9952d5c897d/volumes" Dec 12 15:55:56 crc kubenswrapper[5099]: E1212 15:55:56.466744 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-42fpw" podUID="362b85ef-f3b4-4656-bd6f-567457c085aa" Dec 12 15:55:59 crc kubenswrapper[5099]: E1212 15:55:59.467643 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; 
artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-w8hnr" podUID="911b7734-5d38-4d7f-b31b-a951d59f3fc1" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.247414 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.257722 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.266821 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2sj6_76a2810e-710e-4f57-90b7-23d7bdfea6d8/kube-multus/0.log" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.267146 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.277396 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Dec 12 15:56:03 crc kubenswrapper[5099]: I1212 15:56:03.284230 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"